dev-docs/RFCs/v4.1/event-handling-rfc.md
Notes:
Note: This was copy-pasted from a Google Doc, so the formatting still needs some work.
We are currently building up four or five different, almost identical event handling systems (luma.gl, react-map-gl’s new InteractiveMap and StaticMap, the deck.gl viewport controllers, and the main deck.gl component).
Some of this event handling is for viewport interaction (essentially manipulating the view matrix) and some event handling is for model interaction (hovering/clicking/dragging) features.
These event handling solutions will all eventually need the same features and bug fixes (touch event support, support for screen-offset-relative coordinates, etc.), so duplicating this code in so many places is a problem.
The user will want to compose and customize these event handlers in flexible ways. If we have 5 different implementations with subtly different APIs and behaviours, this will not help.
How will these event handlers interact? If a user embeds a model interaction handler that supports dragging data inside a viewport controller that also supports dragging (but for panning or rotating), will (and should) the model interaction handler still get called?
This RFC was created because there now seems to be an opportunity to come up with a clean event handling architecture that covers all cases and shares code and principles.
The alternative is that we keep making incremental improvements in all of these places and end up with a patchwork of slightly different APIs with different capabilities and bugs.
For deck.gl v5 and beyond, there are a number of development tracks that relate to event handling across our mapping/infovis framework stack.
All of the above should be completely reusable ES6 classes. On top of this we’d provide:
Note that the event model must support chaining:
Event registration class:
Event handler registration code is currently duplicated in luma.gl, deck.gl and react-map-gl, and keeps getting duplicated in new efforts.
This is the current EventManager from react-map-gl:
```js
export default class EventManager {
  constructor(canvas, {
    onMouseMove = noop,
    onMouseClick = noop,
    onMouseDown = noop,
    onMouseUp = noop,
    onMouseRotate = noop,
    onMouseDrag = noop,
    onTouchStart = noop,
    onTouchRotate = noop,
    onTouchDrag = noop,
    onTouchEnd = noop,
    onTouchTap = noop,
    onZoom = noop,
    onZoomEnd = noop,
    mapTouchToMouse = true,
    pressKeyToRotate = false
  } = {}) {
    ...
  }
}
```
If touch handlers are not provided, touch events will call the mouse event handlers, simplifying application code.
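This fallback could be implemented by defaulting each missing touch handler to its mouse counterpart. A minimal sketch, assuming the handler and option names from the constructor above (the actual DOM/gesture wiring is omitted):

```javascript
const noop = () => {};

// Sketch of the touch-to-mouse fallback only; real EventManager
// wiring to DOM or gesture events is out of scope here.
class EventManagerSketch {
  constructor({
    onMouseDown = noop,
    onMouseUp = noop,
    onMouseDrag = noop,
    onTouchStart,
    onTouchEnd,
    onTouchDrag,
    mapTouchToMouse = true
  } = {}) {
    // If a touch handler is not provided, fall back to the
    // corresponding mouse handler (unless the app opted out)
    this.onTouchStart = onTouchStart || (mapTouchToMouse ? onMouseDown : noop);
    this.onTouchEnd = onTouchEnd || (mapTouchToMouse ? onMouseUp : noop);
    this.onTouchDrag = onTouchDrag || (mapTouchToMouse ? onMouseDrag : noop);
  }
}
```

An app that only supplies `onMouseDown`/`onMouseUp` then gets touch support for free.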
Based on the Viewport Support for Infovis RFC, we have now implemented a set of interchangeable viewports in deck.gl, extending deck.gl to support non-geospatial use cases.
There are different controllers:
All controllers should be able to generate a basic Viewport, so all of them can be used for infovis.
Controllers follow the state roundtrip paradigm: they take a set of parameters, listen to events, and call a callback with updated parameters. These classes should not deal directly with user input events; instead they handle semantic transforms of the viewports. Because the controllers are so generic (React-independent), they could potentially be moved to luma.gl (luma.gl/src/controllers), and deck.gl could add its own in deck.gl/src/controllers.
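The roundtrip could look roughly like this. A sketch only; the `setProps`/`onChangeViewport` names and the shape of the viewport props are assumptions for illustration:

```javascript
// Sketch of the state roundtrip: the app owns the viewport parameters,
// the controller computes updated parameters and hands them back via a
// callback. It never mutates app state or touches the DOM itself.
class ControllerSketch {
  constructor({onChangeViewport}) {
    this.onChangeViewport = onChangeViewport;
    this.viewportProps = null;
  }

  // The app pushes the current parameters in on every render
  setProps(viewportProps) {
    this.viewportProps = viewportProps;
  }

  // A semantic transform (triggered by some input event elsewhere)
  pan({deltaLng, deltaLat}) {
    const {longitude, latitude} = this.viewportProps;
    // Call back with updated parameters; the app re-renders and calls
    // setProps again, closing the loop
    this.onChangeViewport({
      ...this.viewportProps,
      longitude: longitude + deltaLng,
      latitude: latitude + deltaLat
    });
  }
}
```

Keeping the controller stateless with respect to the app makes it trivial to drive the same controller from React, vanilla DOM code, or tests.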
```js
export default class MercatorControllerState {
  static propTypes = {
    width: PropTypes.number.isRequired, // The width of the map
    height: PropTypes.number.isRequired, // The height of the map
    latitude: PropTypes.number.isRequired, // The latitude of the center of the map
    longitude: PropTypes.number.isRequired, // The longitude of the center of the map
    zoom: PropTypes.number.isRequired, // The tile zoom level of the map
    bearing: PropTypes.number, // Specify the bearing of the viewport
    pitch: PropTypes.number, // Specify the pitch of the viewport
    altitude: PropTypes.number, // Altitude of viewport camera. Unit: map heights, default 1.5
    maxZoom: PropTypes.number,
    minZoom: PropTypes.number,
    maxPitch: PropTypes.number,
    minPitch: PropTypes.number,
    startDragLngLat: PropTypes.arrayOf(PropTypes.number), // Position when current drag started
    startBearing: PropTypes.number, // Bearing when current perspective drag started
    startPitch: PropTypes.number // Pitch when current perspective drag started
  };

  // Returns a Viewport instance
  getViewport() {}

  // Each of these returns a new state object, enabling chaining
  panStart() {}
  pan() {}
  panEnd() {}

  rotateStart() {}
  rotate() {}
  rotateEnd() {}

  zoomStart() {}
  zoom() {}
  zoomEnd() {}
}
```
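Because each transform returns a new state object, handlers can compose transforms by chaining. A minimal sketch of that immutability contract, with hypothetical props (`offsetX`/`offsetY`, `startPanPos`) standing in for the real viewport parameters:

```javascript
// Minimal immutable-state sketch showing the chaining contract: every
// transform returns a new state object and never mutates `this`.
class ControllerStateSketch {
  constructor(props) {
    this.props = Object.freeze({...props});
  }

  // Helper: clone with updated props (the chaining building block)
  _getUpdatedState(newProps) {
    return new ControllerStateSketch({...this.props, ...newProps});
  }

  panStart({pos}) {
    return this._getUpdatedState({startPanPos: pos});
  }

  pan({pos}) {
    const [startX, startY] = this.props.startPanPos;
    const [x, y] = pos;
    return this._getUpdatedState({
      offsetX: this.props.offsetX + (x - startX),
      offsetY: this.props.offsetY + (y - startY),
      startPanPos: pos
    });
  }

  panEnd() {
    return this._getUpdatedState({startPanPos: null});
  }
}
```

Usage then reads naturally: `state.panStart({pos}).pan({pos}).panEnd()` yields a new state while leaving the original untouched, which is exactly what the `panStart({pos}).zoomStart({pos})` call in the controller component below relies on.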
```js
export default class OrbitControllerState {
  static propTypes = {
    // Target position
    lookAt: PropTypes.arrayOf(PropTypes.number),
    // Camera distance
    distance: PropTypes.number.isRequired,
    minDistance: PropTypes.number,
    maxDistance: PropTypes.number,
    // Rotation
    rotationX: PropTypes.number,
    rotationY: PropTypes.number,
    // Field of view
    fov: PropTypes.number,
    // Viewport width in pixels
    width: PropTypes.number.isRequired,
    // Viewport height in pixels
    height: PropTypes.number.isRequired
  };

  // Returns a Viewport instance
  getViewport() {}

  // Each of these returns a new state object, enabling chaining
  panStart() {}
  pan() {}
  panEnd() {}

  rotateStart() {}
  rotate() {}
  rotateEnd() {}

  zoomStart() {}
  zoom() {}
  zoomEnd() {}
}
```
These are built as thin wrappers over the ES6 controllers. They create a transparent div and pass it to the controller, which installs event handlers using the DOM API. Almost all of the logic goes into the ES6 class, so the React component is quite small: it renders the DOM element, translates DOM events (emitted by EventManager) into viewport events (transforms calculated by the ControllerState classes), and invokes user callbacks. This allows us to build components that can easily be extended for scenarios like:
```jsx
export default class MercatorController extends React.Component {
  static propTypes = {
    controllerState: PropTypes.instanceOf(MercatorControllerState).isRequired,

    /** Event handling toggles, for parity with Mapbox */
    dragPanEnabled: PropTypes.bool,
    dragRotateEnabled: PropTypes.bool,
    scrollZoomEnabled: PropTypes.bool,
    keyboardEnabled: PropTypes.bool,
    doubleClickZoomEnabled: PropTypes.bool,

    /**
     * The `onChangeViewport` callback is fired when the user interacts with
     * the map. The object passed to the callback contains `latitude`,
     * `longitude` and `zoom`, plus additional state information.
     */
    onChangeViewport: PropTypes.func,

    /**
     * Whether the component is currently being hovered/dragged. Used to
     * show/hide the drag cursor, and as an optimization in some overlays
     * (preventing rendering while dragging).
     */
    isHovering: PropTypes.bool,
    isDragging: PropTypes.bool
  };

  componentDidMount() {
    // Register event handlers on the canvas using the EventManager helper class
    this._eventManager = new EventManager(...);
  }

  _onDragStart(event) {
    const pos = [event.offsetX, event.offsetY]; // position from the DOM event
    const newMapState = this.props.controllerState.panStart({pos}).zoomStart({pos});
    this._updateViewport(newMapState);
  }

  _onDrag(event) {}
  _onPinch(event) {}
  _onWheel(event) {}

  ...
}
```
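The translation layer itself (raw event in, semantic transform, user callback out) does not actually need React. A framework-independent sketch, with a stand-in ControllerState; all names and the pixel-to-longitude factor here are hypothetical:

```javascript
// Stand-in for a ControllerState (assumed API: panStart/pan/panEnd each
// return a new state; `props` holds the viewport parameters).
class StubControllerState {
  constructor(props) { this.props = props; }
  panStart({pos}) {
    return new StubControllerState({...this.props, startPos: pos});
  }
  pan({pos}) {
    const [x0] = this.props.startPos;
    return new StubControllerState({
      ...this.props,
      // Hypothetical scale: 0.01 degrees of longitude per pixel dragged
      longitude: this.props.longitude + (pos[0] - x0) * 0.01,
      startPos: pos
    });
  }
  panEnd() {
    return new StubControllerState({...this.props, startPos: null});
  }
}

// The controller component's job in miniature: translate raw
// DOM/EventManager events into semantic transforms and invoke the
// user's callback with the updated viewport parameters.
function makeDragHandlers(initialState, onChangeViewport) {
  let state = initialState;
  return {
    onDragStart(event) {
      state = state.panStart({pos: [event.offsetX, event.offsetY]});
    },
    onDrag(event) {
      state = state.pan({pos: [event.offsetX, event.offsetY]});
      onChangeViewport(state.props); // roundtrip back to the app
    },
    onDragEnd() {
      state = state.panEnd();
      onChangeViewport(state.props);
    }
  };
}
```

A React wrapper would then merely register these handlers via EventManager in `componentDidMount`, keeping the component itself trivial.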
Most of these components would go into deck.gl/src/react. One would go into react-map-gl/src/components. They should be fully compatible and based on the same luma.gl EventManager class.
In react-map-gl we have StaticMap, which handles mapbox interaction events (still passing click events through to mapbox, even though we no longer pass in viewport events).
In deck.gl, the DeckGL React component handles the events, but this makes it harder to do non-React integrations with deck.gl.
Consider implementing the event handling in the LayerManager and have the React component pass in its canvas for event registration.
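A rough sketch of that approach, in which the LayerManager owns event registration and any wrapper (React or not) just hands it a canvas. All names here are hypothetical, not actual deck.gl API:

```javascript
// Sketch: LayerManager owns model-interaction event handling; the
// wrapper component only supplies the canvas element.
class LayerManagerSketch {
  initEventHandling(canvas, {onHover, onClick}) {
    // Register DOM listeners directly on the supplied canvas, so the
    // React (or non-React) wrapper needs no event code of its own
    canvas.addEventListener('mousemove', e => onHover(this._pick(e)));
    canvas.addEventListener('click', e => onClick(this._pick(e)));
  }

  _pick(event) {
    // Placeholder for picking: would map screen coordinates to a
    // picked layer/object; here it just echoes the coordinates
    return {x: event.offsetX, y: event.offsetY, object: null};
  }
}
```

A React component would call `initEventHandling(this._canvas, ...)` in `componentDidMount`, while a vanilla-JS integration would call it with any canvas it created, which is the portability this item is after.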
Base event handling registration: should this build on the same click handlers as the viewport event handling, to keep things simple?
Note: Ultimately we will need to document everything we discuss in this RFC, so we might as well write the proposal in a form that can be reused as a user’s guide.
Recap of action items: