dev-docs/RFCs/v7.2/64bit-attribute-rfc.md
**Mrows/s**: Megarows per second. We use this unit when we talk about attribute generation and rendering performance:
| Layer | Full regeneration | Color regeneration |
|---|---|---|
| LineLayer | 10 Mrows/second | 30 Mrows/second |
| ScatterplotLayer | 12 Mrows/second | |
Add `unit` and `iterations` props to `Probe.bench`. This way we can get benchmark output specified directly in Mrows/s.
Or maybe we can just use a custom Bench formatter?
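Whatever mechanism is chosen, the conversion itself is simple. A minimal sketch of such a formatter, not tied to any specific Bench API (the function name and parameters are assumptions for illustration):

```js
// Converts a raw iterations-per-second number, where each iteration processes
// `rowsPerIteration` rows, into a "Mrows/s" string for benchmark output.
function formatMrowsPerSecond(iterationsPerSecond, rowsPerIteration) {
  const mrows = (iterationsPerSecond * rowsPerIteration) / 1e6;
  return `${mrows.toFixed(1)} Mrows/s`;
}

// e.g. 2.5 benchmark iterations/s over 4M rows each
formatMrowsPerSecond(2.5, 4e6); // '10.0 Mrows/s'
```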
We are extending the JS accessor system to handle "bundles" of binary data (columns).
The specification of how the binary input columns map to the attribute arrays could be done in a couple of different ways:
```js
getColor: [Column('intensity'), Column('intensity'), Column('intensity'), Constant(255)]
```
For custom cases (presumed "rare"), JS code could of course still be needed to transform the binary data.
It might be better to provide some general utilities for working with binary arrays rather than build it into the already highly complex attribute management system.
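As a sketch of what such a standalone utility could look like, the column/constant mapping above could be resolved into an interleaved attribute array roughly like this (`Column`, `Constant` and `packColumns` are hypothetical names, not an existing API):

```js
// Hypothetical descriptors for binary column mappings
const Column = name => ({column: name});
const Constant = value => ({constant: value});

// columns: {name: TypedArray}; mapping: one descriptor per output channel.
// Assumes at least one Column descriptor so the row count can be derived.
function packColumns(columns, mapping, ArrayType = Float32Array) {
  const length = columns[mapping.find(m => m.column).column].length;
  const size = mapping.length;
  const out = new ArrayType(length * size);
  for (let i = 0; i < length; i++) {
    for (let j = 0; j < size; j++) {
      const m = mapping[j];
      out[i * size + j] = 'constant' in m ? m.constant : columns[m.column][i];
    }
  }
  return out;
}

const colors = packColumns(
  {intensity: new Float32Array([0.5, 1])},
  [Column('intensity'), Column('intensity'), Column('intensity'), Constant(255)]
);
// colors is Float32Array [0.5, 0.5, 0.5, 255, 1, 1, 1, 255]
```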
Provide support for default handling of 64-bit attributes in `AttributeManager`, to let us remove numerous duplicated instance generation functions.
Add an `fp64low` flag to `AttributeManager.add()`.
Add a new code path to `attribute.js` that generates the low-order (`fp64low`) parts.
```js
_updateBufferViaStandardAccessor(data, props) {
  const state = this.userData;
  const {accessor} = state;
  const {value, size, fp64low} = this;
  const accessorFunc = props[accessor];
  assert(typeof accessorFunc === 'function', `accessor "${accessor}" is not a function`);

  let i = 0;
  if (fp64low) {
    for (const object of data) {
      const objectValue = accessorFunc(object);
      this._normalizeValue(objectValue, value, i);
      // Also generate the low-order 32-bit parts of the value (sketch)
      this._fp64low(objectValue);
      i += size;
    }
  } else {
    for (const object of data) {
      const objectValue = accessorFunc(object);
      this._normalizeValue(objectValue, value, i);
      i += size;
    }
  }
  this.update({value});
}
```
By adding support for 64-bit attributes in `AttributeManager`, we can remove the majority of custom attribute generators.
Layers that will no longer need custom attribute generators:
| Layer | Custom Attribute Generators | Replacement |
|---|---|---|
| ArcLayer | calculateInstancePositions, calculateInstancePositions64Low | Rewrite to use instanceSourcePositions64, instanceTargetPositions64 |
| BitmapLayer | calculatePositions, calculatePositions64xyLow | Non-standard input format? Rewrite? |
| GridCellLayer | calculateInstancePositions64xyLow | Rewrite to use instancePositions64 |
| HexagonCellLayer | calculateInstancePositions64xyLow | Rewrite to use instancePositions64 |
| LineLayer | calculateInstancePositions64xyLow | Rewrite to use instancePositions64 |
| PointCloudLayer | calculateInstancePositions64xyLow | Rewrite to use instancePositions64 |
| ScatterplotLayer | calculateInstancePositions64xyLow | Rewrite to use instancePositions64 |
Layers that still actually need custom attribute generators:
| Layer | Custom Attribute Generators | Comment |
|---|---|---|
| MultiIconLayer | calculateInstanceOffsets | |
| ScreenGridLayer | calculateInstancePositions, calculateInstanceCounts | |
Tessellating layers (PathLayer and SolidPolygonLayer) have separate issues that are not addressed in this RFC.
To allow high-precision coordinates to be efficiently handled in binary (columnar) input data, Float64Array should be a supported input type and should be a valid input to a 64-bit attribute.
Float64Array values cannot be manipulated directly in shaders, so the function that populates attributes must do the splitting into 32-bit components in JavaScript. Once that is done, the data becomes amenable to shader use, including transform feedback to preprocess accessors. So we want a highly optimized JS function that splits 64-bit arrays.
TODO: could WebAssembly improve performance?
```js
// Splits 64-bit values into interleaved pairs of high and low 32-bit parts
function fp64ifyInterleaved(float64Array, float32Array) {
  float32Array = float32Array || new Float32Array(float64Array.length * 2);
  for (let i = 0; i < float64Array.length; i++) {
    const value64 = float64Array[i];
    float32Array[i * 2] = value64; // high part: value rounded to fp32
    float32Array[i * 2 + 1] = value64 - Math.fround(value64); // low part: residual
  }
  return float32Array;
}
```
```js
// Splits 64-bit values into separate high- and low-part Float32Arrays
function fp64ifySplit(float64Array, float32ArrayHigh, float32ArrayLow) {
  for (let i = 0; i < float64Array.length; i++) {
    const value64 = float64Array[i];
    float32ArrayHigh[i] = value64;
    float32ArrayLow[i] = value64 - Math.fround(value64);
  }
}
```
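To see why the split recovers precision, here is a worked single-value example (the sample longitude is illustrative, not from the RFC's benchmarks):

```js
// Splitting one float64 coordinate into high/low float32 parts,
// as the functions above do per element.
const value64 = -122.4194155; // longitude with more precision than fp32 can hold
const hi = Math.fround(value64);      // high part: value rounded to fp32
const lo = Math.fround(value64 - hi); // low part: the fp32-rounded residual

const fp32Error = Math.abs(hi - value64);       // error when using fp32 alone
const splitError = Math.abs(hi + lo - value64); // error after hi + lo recombination
// splitError is many orders of magnitude smaller than fp32Error
```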
The fastest case is the one where the data is already available in sub-arrays that can be copied directly to the binary attribute:
```js
getPosition: row => row.position
```
By optimizing for this path we seem to be able to reach numbers around 70 Mrows/s, which is of course nice for bragging rights.
However, this RFC assumes that this is not the primary case we want to optimize for: input data will often have e.g. longitude and latitude as top-level columns, and we have to combine those. We should probably choose a flat data structure to use for our reference benchmarks, or maybe show both numbers.
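To make the two benchmark shapes concrete (field names and values are illustrative):

```js
// Nested rows: positions already stored as sub-arrays, copyable directly
const nestedRows = [{position: [-122.42, 37.77]}];
const getPositionNested = row => row.position;

// Flat rows: longitude/latitude are top-level columns that must be combined
const flatRows = [{longitude: -122.42, latitude: 37.77}];
const getPositionFlat = row => [row.longitude, row.latitude];
```

The flat accessor allocates a fresh array per row, which is part of why the flat path benchmarks slower than the sub-array copy path.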
### forEach in custom layer attribute generators

By having layer attribute calculation functions use a `forEach` helper instead of iterating directly, we can support more advanced cases such as chunked input data:
```js
for (const chunk of data) {
  for (const row of chunk) {
    // process row
  }
}
```
The default could be a "zero-cost" outer iterator that just returns data.
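A minimal sketch of such a helper, assuming a hypothetical convention where chunked inputs expose a `chunks` property and plain arrays are treated as a single chunk:

```js
// Iterates rows across chunks; for plain arrays the outer loop is "zero-cost"
// because the data itself becomes the single chunk.
function forEachRow(data, visitor) {
  const chunks = data.chunks || [data]; // `chunks` is an assumed convention
  let index = 0;
  for (const chunk of chunks) {
    for (const row of chunk) {
      visitor(row, index++);
    }
  }
}

// Works identically for plain and chunked inputs:
// forEachRow([1, 2, 3], row => ...)
// forEachRow({chunks: [[1, 2], [3]]}, row => ...)
```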