docs/content/concepts/query-and-transform/catalog-object-model.md
This page covers the Data Platform's object model. For logging and recording basics, see Recordings. For API details, see the Catalog SDK reference.
We refer to the contents stored in a given instance of the Data Platform as the catalog. The catalog contains top-level objects called entries.
There are currently two types of entries: tables and datasets. Each is described in more detail below.
Entries share a few common properties, such as an `id` and a `name`. The `id` is immutable, but the `name` can be changed provided it remains unique.
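These entry semantics can be sketched with a toy in-memory model (plain Python, not the actual Catalog SDK; the class and method names are made up for illustration): the `id` is fixed at creation, and renaming is rejected unless the new name stays unique.

```python
import uuid


class Entry:
    """Toy model of a catalog entry: immutable id, renameable unique name."""

    def __init__(self, name: str) -> None:
        self._id = uuid.uuid4()  # assigned once, never changes
        self.name = name

    @property
    def id(self) -> uuid.UUID:
        return self._id


class Catalog:
    """Toy catalog enforcing name uniqueness across its entries."""

    def __init__(self) -> None:
        self._entries: dict[uuid.UUID, Entry] = {}

    def add(self, entry: Entry) -> None:
        if any(e.name == entry.name for e in self._entries.values()):
            raise ValueError(f"name {entry.name!r} already taken")
        self._entries[entry.id] = entry

    def rename(self, entry_id: uuid.UUID, new_name: str) -> None:
        # Renaming is allowed only while the new name remains unique.
        if any(e.name == new_name for e in self._entries.values() if e.id != entry_id):
            raise ValueError(f"name {new_name!r} already taken")
        self._entries[entry_id].name = new_name


catalog = Catalog()
entry = Entry("my_dataset")
catalog.add(entry)

original_id = entry.id
catalog.rename(entry.id, "robot_runs")  # allowed: new name is unique
assert entry.name == "robot_runs" and entry.id == original_id  # id unchanged
```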
Table entries model a single table of data. They use the Arrow data model, so a table is logically equivalent to an Arrow table. As a result, tables possess an Arrow schema.
Tables support a number of mutation operations through the Catalog SDK.
Thanks to DataFusion, tables also support most database operations such as querying, filtering, joining, etc.
Dataset entries model a collection of Rerun data organized into episodes, such as the recorded runs of a given robotic task. Within a dataset, these episodes are called segments, each identified by a segment ID.
Segments are added to a dataset by registering a recording (typically stored in an object store such as S3) with the dataset using the Catalog SDK.
The recording ID of the .rrd file is used as its segment ID.
Recordings registered to a given segment are organized by layers, identified by a layer name.
By default, the "base" layer name is used.
Registering two .rrd files that share a recording ID (and thus a segment ID) to the same dataset under the same layer name causes the second file to overwrite the first.
To register additively instead, use a different layer name for each .rrd that shares a recording ID/segment ID.
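The overwrite-vs-additive behavior can be sketched with a toy in-memory model (plain Python, not the Catalog SDK; paths and IDs are placeholders): registering to an existing (segment, layer) pair replaces the previous file, while registering under a new layer name adds alongside it.

```python
# Toy model: a dataset maps segment ID -> layer name -> registered .rrd path.
dataset: dict[str, dict[str, str]] = {}


def register(recording_id: str, rrd_path: str, layer: str = "base") -> None:
    # The recording ID of the .rrd becomes the segment ID.
    dataset.setdefault(recording_id, {})[layer] = rrd_path


register("rec_a", "s3://bucket/recording_a.rrd")             # new segment, "base" layer
register("rec_b", "s3://bucket/recording_b.rrd")             # second segment
register("rec_b", "s3://bucket/extra_b.rrd", layer="extra")  # additive: new layer name
register("rec_a", "s3://bucket/recording_a_v2.rrd")          # same segment + layer: overwrites

assert dataset["rec_a"] == {"base": "s3://bucket/recording_a_v2.rrd"}
assert sorted(dataset["rec_b"]) == ["base", "extra"]
```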
```d2
direction: left

Catalog: {
  shape: cylinder

  my_dataset: {
    label: "my_dataset"

    segment_a: {
      label: "segment_a"

      base: {
        label: "layer\n\"base\""
        shape: parallelogram
      }
    }

    segment_b: {
      label: "segment_b"

      base: {
        label: "layer\n\"base\""
        shape: parallelogram
      }

      annotations: {
        label: "layer\n\"extra\""
        shape: parallelogram
      }
    }
  }
}

Object Store: {
  shape: cylinder

  "recording_a.rrd": {
    shape: page
  }
  "recording_b.rrd": {
    shape: page
  }
  "extra_b.rrd": {
    shape: page
  }
}

Object Store."recording_a.rrd" -> Catalog.my_dataset.segment_a.base
Object Store."recording_b.rrd" -> Catalog.my_dataset.segment_b.base
Object Store."extra_b.rrd" -> Catalog.my_dataset.segment_b.annotations
```
Layers are immutable and can only be overwritten by registering a new .rrd file. In other words, datasets support the following mutation operations:

- registering a .rrd with a "new" recording ID (adds a segment)
- registering a .rrd with a matching recording ID to a new layer name (adds a layer)
- registering a .rrd with a matching recording ID to an existing layer name (overwrites that layer)

Datasets are based on the Rerun data model, which consists of a collection of chunks of Arrow data. These chunks hold data for various entities and components, corresponding to various indexes (or timelines). A given collection of chunks, say, a dataset segment, defines an Arrow schema. We refer to this as schema-on-read, because the schema proceeds from the data, and not the other way around. This differs from the table model, where the schema is defined upfront (schema-on-write).
In this context, the schema of a dataset is the union of schemas of its segments, which themselves are the union of the schemas of their layers.
```d2
grid-rows: 3
grid-gap: 10

"my_dataset schema": { width: 400; style.fill: "${d2-config.theme-overrides.B5}" }

"segment_a schema".width: 200
"segment_b schema".width: 200

base1: "base\nschema" { width: 95; style.fill: "${d2-config.theme-overrides.N5}" }
extra: "extra\nschema" { width: 95; style.fill: "${d2-config.theme-overrides.N5}" }
base2: "base\nschema" { width: 200; style.fill: "${d2-config.theme-overrides.N5}" }
```
Datasets maintain a minimal level of schema self-consistency.
Registering a .rrd whose schema is incompatible with the current dataset schema will result in an error.
In this context, incompatible means that the schema of the new .rrd contains a column for the same entity, archetype, and component, but with a different Arrow type.
Such an occurrence is rare, and practically impossible when using standard Rerun archetypes.
A dataset can be assigned a blueprint.
This is done by registering a .rbl blueprint file (typically stored in object storage) with the dataset.
A dedicated API exists for this in the Catalog SDK: DatasetEntry.register_blueprint().
Once registered, the blueprint is applied to all segments of the dataset when they are visualized in the Rerun Viewer.