docs/core/library-sync.mdx
Spacedrive synchronizes library metadata across all your devices using a leaderless peer-to-peer model. Every device is equal. No central server, no single point of failure.
Sync uses two protocols based on data ownership:
Device-owned data (locations, files): The owning device broadcasts changes in real-time and responds to pull requests for historical data. No conflicts possible since only the owner can modify.
Shared resources (tags, collections): Any device can modify. Changes are ordered using Hybrid Logical Clocks (HLC) to ensure consistency across all devices.
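The two-protocol split can be sketched as a routing decision. This is an illustrative sketch, not Spacedrive's actual types; the enum and function names are hypothetical:

```rust
// Hypothetical sketch: route a change through one of the two sync protocols
// based on who owns the data.
#[derive(Debug, PartialEq)]
enum Ownership {
    DeviceOwned, // locations, entries, volumes
    Shared,      // tags, collections, user metadata
}

#[derive(Debug, PartialEq)]
enum SyncRoute {
    StateBroadcast, // owner broadcasts authoritative state; no conflicts possible
    HlcOrderedLog,  // append to the shared log, ordered by HLC
}

fn route(ownership: Ownership) -> SyncRoute {
    match ownership {
        Ownership::DeviceOwned => SyncRoute::StateBroadcast,
        Ownership::Shared => SyncRoute::HlcOrderedLog,
    }
}

fn main() {
    assert_eq!(route(Ownership::DeviceOwned), SyncRoute::StateBroadcast);
    assert_eq!(route(Ownership::Shared), SyncRoute::HlcOrderedLog);
    println!("routing decided by ownership");
}
```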
<Info> Library Sync handles metadata synchronization. For file content synchronization between storage locations, see [File Sync](/docs/core/file-sync). </Info>

| Data Type | Ownership | Sync Method | Conflict Resolution |
|---|---|---|---|
| Devices | Shared | HLC-ordered log | Last write wins |
| Locations | Device-owned | State broadcast | None needed |
| Files/Folders | Device-owned | State broadcast | None needed |
| Volumes | Device-owned | State broadcast | None needed |
| Tags | Shared | HLC-ordered log | Per-model strategy |
| Collections | Shared | HLC-ordered log | Per-model strategy |
| User Metadata | Shared | HLC-ordered log | Per-model strategy |
| Spaces | Shared | HLC-ordered log | Per-model strategy |
| Media Metadata | Shared | HLC-ordered log | Per-model strategy |
| Content IDs | Shared | HLC-ordered log | Per-model strategy |
Spacedrive recognizes that some data naturally belongs to specific devices.
Only the device with physical access can modify:

- Locations (e.g., `/Users/alice/Photos`)
- File and folder entries within those locations
- Volumes

Any device can create or modify:

- Tags, collections, user metadata, and other shared resources
This ownership model eliminates most conflicts and simplifies synchronization.
The sync service runs as a background process with well-defined state transitions.
| State | Description |
|---|---|
| Uninitialized | Device hasn't synced yet (no watermarks) |
| Backfilling { peer, progress } | Receiving initial state from a peer (0-100%) |
| CatchingUp { buffered_count } | Processing updates buffered during backfill |
| Ready | Fully synced, applying real-time updates |
| Paused | Sync disabled or device offline |
| From | Trigger | To |
|---|---|---|
| Uninitialized | Peer becomes available | Backfilling |
| Uninitialized | Already has data | Ready |
| Backfilling | Transfer complete | CatchingUp |
| Backfilling | Peer disconnected | Save checkpoint, select new peer |
| CatchingUp | Buffer empty | Ready |
| CatchingUp | 5 consecutive failures | Uninitialized (escalate to full backfill) |
| Ready | Device goes offline | Paused |
| Ready | Watermarks stale | CatchingUp |
| Paused | Device comes online | Ready or CatchingUp |
During backfill, incoming real-time updates are buffered to prevent data loss:
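A minimal sketch of that buffering, assuming a simplified `SyncService` with hypothetical names (real updates would be typed sync messages, not strings):

```rust
// Hypothetical sketch: buffer real-time updates while a backfill is running,
// then drain them in arrival order once the transfer completes.
#[derive(Debug, PartialEq)]
enum SyncState {
    Backfilling,
    CatchingUp,
    Ready,
}

struct SyncService {
    state: SyncState,
    buffer: Vec<String>,
}

impl SyncService {
    /// Returns Some(update) when it can be applied immediately,
    /// or None when it was buffered for later.
    fn on_update(&mut self, update: String) -> Option<String> {
        match self.state {
            // During backfill, stash the update instead of applying it.
            SyncState::Backfilling => {
                self.buffer.push(update);
                None
            }
            // Otherwise apply immediately.
            _ => Some(update),
        }
    }

    // Transfer complete: move to CatchingUp and drain the buffer in order.
    fn finish_backfill(&mut self) -> Vec<String> {
        self.state = SyncState::CatchingUp;
        std::mem::take(&mut self.buffer)
    }
}

fn main() {
    let mut svc = SyncService { state: SyncState::Backfilling, buffer: Vec::new() };
    assert_eq!(svc.on_update("tag-created".into()), None); // buffered, not applied
    let pending = svc.finish_backfill();
    assert_eq!(pending, vec!["tag-created".to_string()]);
    assert_eq!(svc.state, SyncState::CatchingUp);
    println!("buffered {} update(s) during backfill", pending.len());
}
```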
If incremental catch-up fails repeatedly, the system uses exponential backoff before escalating to a full backfill.
| Attempt | Delay | Action |
|---|---|---|
| 1 | 10s | Retry |
| 2 | 20s | Retry |
| 3 | 40s | Retry |
| 4 | 80s | Retry |
| 5 | 160s (capped) | Retry |
| 6+ | - | Reset to Uninitialized, trigger full backfill |
This prevents permanent sync failures from transient network issues.
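The schedule above amounts to a 10-second base delay doubled per attempt and capped at 160 seconds; a sketch (function name hypothetical):

```rust
// Hypothetical sketch of the catch-up retry schedule: 10s, doubling per
// attempt, capped at 160s; attempts beyond 5 escalate to a full backfill.
fn catchup_retry_delay_secs(attempt: u32) -> Option<u64> {
    if attempt == 0 || attempt > 5 {
        return None; // attempt 6+: reset to Uninitialized instead of retrying
    }
    let delay = 10u64 << (attempt - 1); // 10, 20, 40, 80, 160
    Some(delay.min(160))
}

fn main() {
    for attempt in 1..=6 {
        match catchup_retry_delay_secs(attempt) {
            Some(d) => println!("attempt {attempt}: retry in {d}s"),
            None => println!("attempt {attempt}: escalate to full backfill"),
        }
    }
}
```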
State-based sync uses two mechanisms depending on the scenario:
Real-time broadcast: When Device A creates or modifies a location, it sends a StateChange message via unidirectional stream to all connected peers. Peers apply the update immediately.
Pull-based backfill: When Device B is new or reconnecting after being offline, it sends a StateRequest to Device A. Device A responds with a StateResponse containing records in configurable batches. This request/response pattern uses bidirectional streams.
For large datasets, pagination automatically handles multiple batches using cursor-based checkpoints. The StateRequest includes both watermark and cursor:
```rust
StateRequest {
    model_types: ["location", "entry"],
    since: Some(last_state_watermark),                 // Only records newer than this
    checkpoint: Some("2025-10-21T19:10:00.456Z|uuid"), // Resume cursor
    batch_size: config.batching.backfill_batch_size,
}
```
No version tracking needed. The owner's state is always authoritative.
Shared resources sync through an ordered log. When you create a tag, the device inserts it locally, generates an HLC timestamp, appends to the sync log, and broadcasts a SharedChange message. Receiving devices apply changes in HLC order, then acknowledge receipt. Once all peers acknowledge, the entry is pruned from the log.
Batch operations work the same way but send a single SharedChangeBatch message containing multiple entries. This reduces network overhead during bulk imports while preserving HLC ordering for each individual record.
For large datasets, the system uses HLC-based pagination. Each request includes the last seen HLC, and the peer responds with the next batch. This scales to millions of shared resources.
HLCs provide global ordering without synchronized clocks:
```rust
pub struct HLC {
    /// Physical time component (milliseconds since Unix epoch)
    pub timestamp: u64,
    /// Logical counter for events within the same millisecond
    pub counter: u64,
    /// Device that generated this HLC (for deterministic ordering)
    pub device_id: Uuid,
}
```
The HLC string format for storage and comparison is `{timestamp:016x}-{counter:016x}-{device_id}`, which is lexicographically sortable.
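For illustration, the sortable encoding could be produced like this (helper name hypothetical; the device id is shown as a plain string):

```rust
// Hypothetical sketch of the sortable HLC string encoding: zero-padded,
// fixed-width hex timestamp and counter, then the device id.
fn hlc_to_string(timestamp: u64, counter: u64, device_id: &str) -> String {
    format!("{timestamp:016x}-{counter:016x}-{device_id}")
}

fn main() {
    let earlier = hlc_to_string(1_700_000_000_000, 0, "device-a");
    let later = hlc_to_string(1_700_000_000_001, 0, "device-a");
    // Fixed-width hex makes plain string comparison agree with HLC order.
    assert!(earlier < later);

    let same_ms = hlc_to_string(1_700_000_000_000, 1, "device-a");
    assert!(earlier < same_ms); // counter breaks ties within a millisecond
    println!("{earlier}\n{same_ms}\n{later}");
}
```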
Properties:

- Total order: any two HLCs compare deterministically via (timestamp, counter, device_id)
- Causality: an event's HLC is always greater than any HLC it has observed
- No clock synchronization required between devices
When generating or receiving an HLC, the system maintains causality:
```rust
fn generate(last: Option<HLC>, device_id: Uuid) -> HLC {
    let physical = now_millis();
    let (timestamp, counter) = match last {
        Some(prev) if prev.timestamp >= physical => {
            // Clock hasn't advanced, increment counter
            (prev.timestamp, prev.counter + 1)
        }
        Some(_) => {
            // Clock advanced, reset counter
            (physical, 0)
        }
        None => (physical, 0),
    };
    HLC { timestamp, counter, device_id }
}

fn update(&mut self, received: HLC) {
    let physical = now_millis();
    let max_ts = max(self.timestamp, max(received.timestamp, physical));
    self.counter = if max_ts == self.timestamp && max_ts == received.timestamp {
        max(self.counter, received.counter) + 1
    } else if max_ts == self.timestamp {
        self.counter + 1
    } else if max_ts == received.timestamp {
        received.counter + 1
    } else {
        0 // Physical time advanced
    };
    self.timestamp = max_ts;
}
```
This ensures:

- Each new HLC is strictly greater than every HLC the device has generated or received
- Received changes are never ordered before changes they causally follow
- Counters reset when physical time advances, keeping them small
Each shared model implements its own apply_shared_change() method, allowing per-model conflict resolution strategies. The Syncable trait provides this flexibility.
Default behavior (most models): Last Write Wins based on HLC ordering. When two devices concurrently modify the same record, the change with the higher HLC is applied:
```text
Device A updates tag with HLC(timestamp_a, 0, device-a)
Device B updates same tag with HLC(timestamp_b, 0, device-b)

If timestamp_b > timestamp_a: Device B's version wins
If timestamps equal: Higher device_id breaks the tie (deterministic)
```
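This last-write-wins comparison amounts to lexicographic ordering on the (timestamp, counter, device_id) tuple; a sketch with simplified types (the device id is shown as an integer, standing in for the Uuid):

```rust
use std::cmp::Ordering;

// Hypothetical sketch: LWW winner selection via derived tuple ordering.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone)]
struct Hlc {
    timestamp: u64,
    counter: u64,
    device_id: u128, // stands in for the Uuid; orders the same way
}

fn lww_winner(a: Hlc, b: Hlc) -> Hlc {
    // Derived Ord compares timestamp, then counter, then device_id, so equal
    // timestamps fall through to the deterministic device-id tie-break.
    if a.cmp(&b) == Ordering::Greater { a } else { b }
}

fn main() {
    let from_a = Hlc { timestamp: 100, counter: 0, device_id: 1 };
    let from_b = Hlc { timestamp: 100, counter: 0, device_id: 2 };
    // Same timestamp and counter: the higher device id wins on every device.
    assert_eq!(lww_winner(from_a.clone(), from_b.clone()), from_b);
    println!("winner: {:?}", lww_winner(from_a, from_b));
}
```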
Creation conflicts: When two devices create resources with the same logical identity (e.g., same tag name) but different UUIDs, both resources coexist. This is an implicit union merge - no data is lost.
```text
Device A creates tag "Vacation" with UUID-A
Device B creates tag "Vacation" with UUID-B

After sync: Both tags exist (different UUIDs, same name)
Tags can be disambiguated by namespace or merged by user
```
Custom strategies: Models can override apply_shared_change() to implement field-level merges, domain-specific rules (e.g., keep the longer description, union tag sets), or other per-model behavior.
The sync system checks the peer log before applying changes to ensure only newer updates are applied.
Contains all library data from all devices:
```sql
-- Device-owned tables
CREATE TABLE devices (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    name TEXT,
    slug TEXT
);

CREATE TABLE volumes (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    device_id INTEGER,   -- Owner device
    fingerprint TEXT,    -- Stable cross-mount identifier
    FOREIGN KEY (device_id) REFERENCES devices(id)
);

CREATE TABLE locations (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    volume_id INTEGER,   -- Ownership via volume
    entry_id INTEGER,    -- Root entry
    name TEXT,
    FOREIGN KEY (volume_id) REFERENCES volumes(id)
);

CREATE TABLE entries (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    volume_id INTEGER,   -- Ownership via volume
    parent_id INTEGER,
    name TEXT,
    kind INTEGER,
    size_bytes INTEGER,
    FOREIGN KEY (volume_id) REFERENCES volumes(id)
);

-- Shared resource tables
CREATE TABLE tags (
    id INTEGER PRIMARY KEY,
    uuid TEXT UNIQUE,
    canonical_name TEXT
);
```
Ownership flows through volumes: Device → Volume → Location/Entry. This indirection enables portable storage to transfer between devices with a single volume update.
Contains pending changes for shared resources and sync coordination data:
```sql
-- Shared resource changes pending acknowledgment
CREATE TABLE shared_changes (
    hlc TEXT PRIMARY KEY,
    model_type TEXT NOT NULL,
    record_uuid TEXT NOT NULL,
    change_type TEXT NOT NULL, -- insert/update/delete
    data TEXT NOT NULL,        -- JSON payload
    created_at TEXT NOT NULL   -- When this change was logged
);

-- Peer acknowledgment tracking (outgoing - for pruning our log)
-- Tracks which of our changes each peer has acknowledged receiving
CREATE TABLE peer_acks (
    peer_device_id TEXT PRIMARY KEY,
    last_acked_hlc TEXT NOT NULL,
    acked_at TEXT NOT NULL
);

-- Per-resource watermarks for device-owned incremental sync
CREATE TABLE device_resource_watermarks (
    device_uuid TEXT NOT NULL,
    peer_device_uuid TEXT NOT NULL,
    resource_type TEXT NOT NULL,  -- "location", "entry", "volume", etc.
    last_watermark TEXT NOT NULL, -- RFC3339 timestamp
    updated_at TEXT NOT NULL,
    PRIMARY KEY (device_uuid, peer_device_uuid, resource_type)
);

-- Per-peer watermarks for shared resource incremental sync (incoming)
-- Tracks the maximum HLC we've received from each peer
CREATE TABLE peer_received_watermarks (
    device_uuid TEXT NOT NULL,
    peer_device_uuid TEXT NOT NULL,
    max_received_hlc TEXT NOT NULL, -- Maximum HLC received from this peer
    updated_at TEXT NOT NULL,
    PRIMARY KEY (device_uuid, peer_device_uuid)
);

-- Resumable backfill checkpoints
CREATE TABLE backfill_checkpoints (
    id INTEGER PRIMARY KEY,
    peer_device_uuid TEXT NOT NULL,
    model_type TEXT NOT NULL,
    resume_token TEXT,      -- timestamp|uuid cursor
    progress REAL,          -- 0.0 to 1.0
    completed_models TEXT,  -- JSON array of completed model types
    created_at TEXT NOT NULL,
    updated_at TEXT NOT NULL
);
```
The sync API handles all complexity internally. Three methods cover all use cases:
```rust
// 1. Simple models without FK relationships (shared resources)
//    Use sync_model() - no DB connection needed
let tag = tag::ActiveModel { ... }.insert(db).await?;
library.sync_model(&tag, ChangeType::Insert).await?;

// 2. Models with FK relationships (needs UUID lookup)
//    Use sync_model_with_db() - requires DB connection for FK conversion
let location = location::ActiveModel { ... }.insert(db).await?;
library.sync_model_with_db(&location, ChangeType::Insert, db.conn()).await?;

// 3. Bulk operations (1000+ records)
//    Use sync_models_batch() - batches FK lookups and network broadcasts
let entries: Vec<entry::Model> = bulk_insert_entries(db).await?;
library.sync_models_batch(&entries, ChangeType::Insert, db.conn()).await?;
```
The API automatically handles serialization, foreign key to UUID conversion, sync log entries for shared resources, and broadcasting to connected peers.
To make a model syncable, implement the Syncable trait and register it with a macro:
```rust
impl Syncable for YourModel {
    /// Stable model identifier used in sync logs (must never change)
    const SYNC_MODEL: &'static str = "your_model";

    /// Get the globally unique ID for this resource
    fn sync_id(&self) -> Uuid {
        self.uuid
    }

    /// Version number for optimistic concurrency control
    fn version(&self) -> i64 {
        self.version
    }

    /// Fields to exclude from sync (platform-specific data)
    fn exclude_fields() -> Option<&'static [&'static str]> {
        Some(&["id", "created_at", "updated_at"])
    }

    /// Declare sync dependencies on other models
    fn sync_depends_on() -> &'static [&'static str] {
        &["parent_model"] // Models that must sync first
    }

    /// Declare foreign key mappings for automatic UUID conversion
    fn foreign_key_mappings() -> Vec<FKMapping> {
        vec![
            FKMapping::new("device_id", "devices"),
            FKMapping::new("parent_id", "your_models"),
        ]
    }
}

// Register with sync system - choose based on ownership model:

// For shared resources (any device can modify):
crate::register_syncable_shared!(Model, "your_model", "your_table");

// For shared resources with closure table rebuild after backfill:
crate::register_syncable_shared!(Model, "tag_relationship", "tag_relationship", with_rebuild);

// For device-owned data:
crate::register_syncable_device_owned!(Model, "your_model", "your_table");

// With deletion support:
crate::register_syncable_device_owned!(Model, "your_model", "your_table", with_deletion);

// With deletion + post-backfill rebuild (for models with closure tables):
crate::register_syncable_device_owned!(Model, "entry", "entries", with_deletion, with_rebuild);
```
The with_rebuild flag triggers post_backfill_rebuild() after backfill completes, which rebuilds derived tables like entry_closure or tag_closure from the synced base data.
The registration macros use the inventory crate for automatic discovery at startup - no manual registry initialization needed.
Shared models can implement custom conflict resolution by overriding apply_shared_change():
```rust
impl Syncable for YourModel {
    // ... other trait methods ...

    async fn apply_shared_change(
        entry: SharedChangeEntry,
        db: &DatabaseConnection,
    ) -> Result<(), sea_orm::DbErr> {
        match entry.change_type {
            ChangeType::Insert | ChangeType::Update => {
                // Option 1: Default LWW - just upsert
                let active = deserialize_to_active_model(&entry.data)?;
                Entity::insert(active)
                    .on_conflict(/* upsert on uuid */)
                    .exec(db).await?;

                // Option 2: Field-level merge
                if let Some(existing) = Entity::find_by_uuid(uuid).one(db).await? {
                    let merged = merge_fields(existing, incoming, entry.hlc);
                    merged.update(db).await?;
                }

                // Option 3: Domain-specific rules
                // e.g., keep longer description, union tags, etc.
            }
            ChangeType::Delete => {
                Entity::delete_by_uuid(uuid).exec(db).await?;
            }
        }
        Ok(())
    }
}
```
Currently, all models use the default LWW strategy. Custom strategies can be added per-model as needed without changes to the sync infrastructure.
To prevent foreign key violations, the sync system must process models in a specific order (e.g., Device records must exist before the Location records that depend on them). Spacedrive determines this order automatically at startup using a deterministic algorithm.
The process works as follows:
Dependency Declaration: Each syncable model declares its parent models using the sync_depends_on() function. This creates a dependency graph where an edge from Entry to Volume means Entry depends on Volume.
Topological Sort: The SyncRegistry takes the full list of models and their dependencies and performs a topological sort using Kahn's algorithm. This algorithm produces a linear ordering of the models where every parent model comes before its children. It also detects circular dependencies (e.g., A depends on B, and B depends on A).
Ordered Execution: The BackfillManager receives this ordered list (e.g., ["device", "volume", "tag", "location", "entry"]) and uses it to sync data in the correct sequence, guaranteeing that no foreign key violations can occur.
The sync system respects model dependencies and enforces ordering:
| Order | Models | Reason |
|---|---|---|
| 1 | Shared resources (tags, collections, content_identities) | No dependencies, entries reference these |
| 2 | Devices | Root of ownership chain |
| 3 | Volumes | Needs devices |
| 4 | Locations | Needs volumes |
| 5 | Entries | Needs volumes and content_identities |
Shared resources sync first because entries reference content identities via foreign key. This prevents NULL foreign key references during backfill.
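The ordering step can be sketched with Kahn's algorithm over a small dependency graph. This is an illustrative, self-contained sketch, not the SyncRegistry implementation; the function name and graph representation are hypothetical:

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical sketch of Kahn's algorithm over the model dependency graph:
// returns an order where every parent model precedes its dependents,
// or None if a cycle is detected.
fn topo_sort(deps: &[(&'static str, &[&'static str])]) -> Option<Vec<&'static str>> {
    // in_degree counts how many unmet dependencies each model still has.
    let mut in_degree: HashMap<&str, usize> =
        deps.iter().map(|(m, p)| (*m, p.len())).collect();
    let mut dependents: HashMap<&str, Vec<&'static str>> = HashMap::new();
    for (model, parents) in deps {
        for &parent in *parents {
            dependents.entry(parent).or_default().push(*model);
        }
    }

    // Start with models that depend on nothing, in declaration order.
    let mut queue: VecDeque<&'static str> = deps
        .iter().filter(|(m, _)| in_degree[m] == 0).map(|(m, _)| *m).collect();
    let mut order = Vec::new();
    while let Some(model) = queue.pop_front() {
        order.push(model);
        for &dep in dependents.get(model).into_iter().flatten() {
            let d = in_degree.get_mut(dep).unwrap();
            *d -= 1;
            if *d == 0 { queue.push_back(dep); }
        }
    }

    // Fewer models than declared means a cycle was left unresolved.
    (order.len() == deps.len()).then_some(order)
}

fn main() {
    let deps: &[(&str, &[&str])] = &[
        ("device", &[]),
        ("volume", &["device"]),
        ("location", &["volume"]),
        ("entry", &["volume"]),
    ];
    let order = topo_sort(deps).expect("no cycles in the model graph");
    let pos = |m: &str| order.iter().position(|&x| x == m).unwrap();
    assert!(pos("device") < pos("volume"));
    assert!(pos("volume") < pos("location"));
    assert!(pos("volume") < pos("entry"));
    println!("sync order: {order:?}");
}
```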
The sync system must ensure that relationships between models are preserved across devices. Since each device uses local, auto-incrementing integer IDs for performance, these IDs cannot be used for cross-device references.
This is where foreign key translation comes in, a process orchestrated by the foreign_key_mappings() function on the Syncable trait.
The Process:
Outgoing: When a record is being prepared for sync, the system uses the foreign_key_mappings() definition to find all integer foreign key fields (e.g., parent_id: 42). It looks up the corresponding UUID for each of these IDs in the local database and sends the UUIDs over the network (e.g., parent_uuid: "abc-123...").
Incoming: When a device receives a record, it does the reverse. It uses foreign_key_mappings() to identify the incoming UUID foreign keys, looks up the corresponding local integer ID for each UUID, and replaces them before inserting the record into its own database (e.g., parent_uuid: "abc-123..." → parent_id: 15).
This entire translation process is automatic and transparent.
Batch FK Optimization: For bulk operations (backfill, batch sync), the system uses batch_map_sync_json_to_local() which reduces database queries from N×M (N records × M FKs) to just M (one query per FK type). For 1000 records with 3 FK fields each, this is a 1000x reduction in queries.
```rust
// Before: 3000 queries for 1000 records with 3 FKs each
// After: 3 queries total (one per FK type)
let result = batch_map_sync_json_to_local(records, fk_mappings, db).await?;

// Records with missing FK references are returned separately for retry
for (record, fk_field, missing_uuid) in result.failed {
    // Buffer for retry when dependency arrives
}
```
During backfill, records may arrive before their FK dependencies (e.g., an entry before its parent folder). The DependencyTracker handles this efficiently:
```rust
// Record fails FK resolution - parent doesn't exist yet
let error = "Foreign key lookup failed: parent_uuid abc-123 not found";
let missing_uuid = extract_missing_dependency_uuid(&error);

// Track the waiting record
dependency_tracker.add_dependency(missing_uuid, buffered_update);

// Later, when parent record arrives and is applied...
let waiting = dependency_tracker.resolve(parent_uuid);
for update in waiting {
    // Retry applying - FK should resolve now
    apply_update(update).await?;
}
```
This provides O(n) targeted retry instead of O(n²) "retry entire buffer" approaches:
| Approach | Records | FKs | Retries | Complexity |
|---|---|---|---|---|
| Retry all | 10,000 | 3 | 10,000 × 10,000 | O(n²) |
| Dependency tracking | 10,000 | 3 | ~100 targeted | O(n) |
The tracker maintains a map of missing_uuid → Vec<waiting_updates>. When a record is successfully applied, its UUID is checked against the tracker to resolve any waiting dependents.
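A sketch of that map-based tracker with simplified types (real keys and values would be Uuids and buffered sync updates, not strings):

```rust
use std::collections::HashMap;

// Hypothetical sketch of the missing_uuid -> waiting_updates map.
#[derive(Default)]
struct DependencyTracker {
    waiting: HashMap<String, Vec<String>>,
}

impl DependencyTracker {
    // A record failed FK resolution: park it under the UUID it is waiting for.
    fn add_dependency(&mut self, missing_uuid: &str, update: &str) {
        self.waiting
            .entry(missing_uuid.to_string())
            .or_default()
            .push(update.to_string());
    }

    // A record was applied: release only the updates that waited on its UUID.
    fn resolve(&mut self, applied_uuid: &str) -> Vec<String> {
        self.waiting.remove(applied_uuid).unwrap_or_default()
    }
}

fn main() {
    let mut tracker = DependencyTracker::default();
    tracker.add_dependency("parent-uuid", "child-entry-1");
    tracker.add_dependency("parent-uuid", "child-entry-2");
    tracker.add_dependency("other-uuid", "unrelated-entry");

    // Applying the parent releases exactly its two dependents, nothing else.
    let released = tracker.resolve("parent-uuid");
    assert_eq!(released.len(), 2);
    assert!(tracker.resolve("parent-uuid").is_empty());
    println!("released {} waiting update(s)", released.len());
}
```

Only the dependents of the just-applied record are retried, which is what keeps the retry cost linear instead of quadratic.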
Spacedrive does not require a direct connection between all devices to keep them in sync. Changes can propagate transitively through intermediaries, ensuring the entire library eventually reaches a consistent state.
This is made possible by two core architectural principles:
Complete State Replication: Every device maintains a full and independent copy of the entire library's shared state (like tags, collections, etc.). When Device A syncs a new tag to Device B, that tag becomes a permanent part of Device B's database, not just a temporary message.
State-Based Backfill: When a new or offline device (Device C) connects to any peer in the library (Device B), it initiates a backfill process. As part of this process, Device C requests the complete current state of all shared resources from Device B.
How it Works in Practice:
<Steps> <Step title="1. Device A syncs to B"> Device A creates a new tag. It connects to Device B and syncs the tag. The tag is now stored in the database on both A and B. Device A then goes offline. </Step> <Step title="2. Device C connects to B"> Device C comes online and connects only to Device B. It has never communicated with Device A. </Step> <Step title="3. Device C Backfills from B"> Device C requests the complete state of all shared resources from Device B. Since Device B has a full copy of the library state (including the tag from Device A), it sends that tag to Device C. </Step> <Step title="4. Library is Consistent"> Device C now has the tag created by Device A, even though they never connected directly. The change has propagated transitively. </Step> </Steps>

This architecture provides significant redundancy and resilience, as the library can stay in sync as long as there is any path of connectivity between peers.
When starting a backfill, the system scores available peers to select the best source:
```rust
fn score(&self) -> i32 {
    let mut score = 0;

    // Prefer online peers
    if self.is_online { score += 100; }

    // Prefer peers with complete state
    if self.has_complete_state { score += 50; }

    // Prefer low latency (measured RTT)
    score -= (self.latency_ms / 10) as i32;

    // Prefer less busy peers
    score -= (self.active_syncs * 10) as i32;

    score
}
```
Peers are sorted by score (highest first). The best peer is selected for backfill. If that peer disconnects, the checkpoint is saved and a new peer is selected.
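Selection over the scored peers might then look like the following sketch. The `Peer` struct and `select_backfill_peer` are hypothetical; only the scoring formula comes from the function above:

```rust
// Hypothetical sketch: pick the highest-scoring online peer as the
// backfill source, using the scoring formula from the docs.
struct Peer {
    name: &'static str,
    is_online: bool,
    has_complete_state: bool,
    latency_ms: u32,
    active_syncs: u32,
}

impl Peer {
    fn score(&self) -> i32 {
        let mut score = 0;
        if self.is_online { score += 100; }
        if self.has_complete_state { score += 50; }
        score -= (self.latency_ms / 10) as i32;
        score -= (self.active_syncs * 10) as i32;
        score
    }
}

fn select_backfill_peer(peers: &[Peer]) -> Option<&Peer> {
    peers.iter().filter(|p| p.is_online).max_by_key(|p| p.score())
}

fn main() {
    let peers = [
        // 100 + 50 - 4 - 0  = 146
        Peer { name: "laptop", is_online: true, has_complete_state: true, latency_ms: 40, active_syncs: 0 },
        // 100 + 50 - 0 - 30 = 120 (low latency, but busy)
        Peer { name: "desktop", is_online: true, has_complete_state: true, latency_ms: 5, active_syncs: 3 },
        // offline peers are filtered out entirely
        Peer { name: "phone", is_online: false, has_complete_state: true, latency_ms: 1, active_syncs: 0 },
    ];
    let best = select_backfill_peer(&peers).expect("at least one online peer");
    println!("backfilling from {}", best.name);
}
```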
System-provided resources use deterministic UUIDs (v5 namespace hashing) so they're identical across all devices:
```rust
// System tags have consistent UUIDs everywhere
let system_tag_uuid = deterministic_system_tag_uuid("system");
// Always: 550e8400-e29b-41d4-a716-446655440000 (example)

// Library-scoped defaults
let default_uuid = deterministic_library_default_uuid(library_id, "default_collection");
```
Use deterministic UUIDs for:

- System-provided resources (e.g., system tags, library-scoped defaults)

Use random UUIDs for:

- User-created content (e.g., user tags, collections)
This prevents creation conflicts for system resources while allowing polymorphic naming for user content.
Device-owned deletions use tombstones that sync via StateResponse. When you delete a location or folder with thousands of files, only the root UUID is tombstoned. Receiving devices cascade the deletion through their local tree automatically.
Shared resource deletions use HLC-ordered log entries with ChangeType::Delete. All devices process deletions in the same order for consistency.
Pruning: Both deletion mechanisms use acknowledgment-based pruning. Tombstones and peer log entries are removed after all devices have synced past them. A 7-day safety limit prevents offline devices from blocking pruning indefinitely.
The system tracks deletions in a device_state_tombstones table. Each tombstone contains just the root UUID of what was deleted. When syncing entries for a device, the StateResponse includes both updated records and a list of deleted UUIDs since your last sync.
```rust
StateResponse {
    records: [...],         // New and updated entries
    deleted_uuids: [uuid1], // Root UUID only (cascade handles children)
}
```
Receiving devices look up each deleted UUID and call the same deletion logic used locally. For entries, this triggers delete_subtree() which removes all descendants via the entry_closure table. A folder with thousands of files requires only one tombstone and one network message.
Race condition protection: Models check tombstones before applying state changes during backfill. If a deletion arrives before the record itself, the system skips creating it. For entries, the system also checks if the parent is tombstoned to prevent orphaned children.
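That guard might be sketched as follows, with simplified string keys and hypothetical names:

```rust
use std::collections::HashSet;

// Hypothetical sketch: skip creating records whose UUID (or parent UUID)
// is already tombstoned, so late-arriving state never resurrects deletions.
struct TombstoneGuard {
    tombstoned: HashSet<String>,
}

impl TombstoneGuard {
    fn should_apply(&self, record_uuid: &str, parent_uuid: Option<&str>) -> bool {
        if self.tombstoned.contains(record_uuid) {
            return false; // the record itself was deleted
        }
        if let Some(parent) = parent_uuid {
            if self.tombstoned.contains(parent) {
                return false; // parent deleted: avoid creating an orphan
            }
        }
        true
    }
}

fn main() {
    let guard = TombstoneGuard {
        tombstoned: ["folder-uuid".to_string()].into_iter().collect(),
    };
    // The folder's deletion arrived first: skip both it and its children.
    assert!(!guard.should_apply("folder-uuid", None));
    assert!(!guard.should_apply("file-uuid", Some("folder-uuid")));
    assert!(guard.should_apply("other-file", Some("other-folder")));
    println!("tombstone guard ok");
}
```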
Data created before enabling sync is included during backfill. When the peer log has been pruned or contains fewer items than expected, the response includes a current state snapshot:
```rust
SharedChangeResponse {
    entries: [...],   // Recent changes from peer log
    current_state: {
        tags: [...],  // Complete snapshot
        content_identities: [...],
        collections: [...],
    },
    has_more: bool,   // True if snapshot exceeds batch limit
}
```
The receiving device applies both the incremental changes and the current state snapshot, ensuring all shared resources sync correctly even if created before sync was enabled.
When devices reconnect after being offline, they use watermarks to avoid full re-sync.
Per-Resource Watermarks: Each resource type (location, entry, volume) tracks its own timestamp watermark per peer device. This prevents watermark advancement in one resource from filtering out records in another resource with earlier timestamps.
The device_resource_watermarks table in sync.db tracks, for each (device, peer, resource type) combination, the last timestamp watermark and when it was updated.
This allows independent sync progress: if entries sync to timestamp T1 but locations only sync to T0, each resource type resumes from its own watermark rather than a global one.
Watermark Advancement: Watermarks only advance when data is actually received. This invariant prevents a subtle data loss bug: if a catch-up request returns empty (peer has no new data), advancing the watermark anyway would permanently filter out any records that should have been returned. The system tracks the maximum timestamp from received records and uses that for the watermark update.
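The invariant can be sketched as: compute the new watermark only from received records, never from wall-clock time or the request bound. The helper below is hypothetical, with RFC3339 timestamps as plain strings (which compare correctly lexicographically):

```rust
// Hypothetical sketch of the watermark-advancement invariant: the watermark
// moves only to the maximum timestamp actually received; an empty response
// leaves it untouched so no records are ever filtered out by mistake.
fn advance_watermark(current: &str, received_timestamps: &[&str]) -> String {
    match received_timestamps.iter().max() {
        // RFC3339 timestamps compare correctly as strings.
        Some(max_ts) if *max_ts > current => max_ts.to_string(),
        _ => current.to_string(), // empty response: do NOT advance
    }
}

fn main() {
    let wm = "2025-10-20T14:30:00Z";
    // Empty catch-up response: watermark stays put.
    assert_eq!(advance_watermark(wm, &[]), wm);
    // Data received: advance to the newest record's timestamp.
    let next = advance_watermark(wm, &["2025-10-21T09:00:00Z", "2025-10-21T08:00:00Z"]);
    assert_eq!(next, "2025-10-21T09:00:00Z");
    println!("watermark: {next}");
}
```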
Shared Watermark: HLC of the last shared resource change seen. Used for incremental sync of tags, collections, and other shared resources.
Stale Watermark Handling: If a watermark is older than force_full_sync_threshold_days (default 25 days), the system forces a full sync instead of incremental catch-up. This ensures consistency when tombstones for deletions may have been pruned.
During catch-up, the device sends a StateRequest with the since parameter set to its watermark. The peer responds with only records modified after that timestamp. This is a pull request, not a broadcast.
Example flow when Device B reconnects:
1. Device B checks entry watermark for Device A: 2025-10-20 14:30:00
2. Device B sends StateRequest(model_types: ["entry"], since: 2025-10-20 14:30:00) to Device A
3. Device A queries: SELECT * FROM entries WHERE updated_at >= '2025-10-20 14:30:00'
4. Device A responds with StateResponse containing 3 new entries
5. Device B applies changes and updates entry watermark for Device A
This syncs only changed records instead of re-syncing the entire dataset.
Both device-owned and shared resources use cursor-based pagination for large datasets. Batch size is configurable via SyncConfig.
Device-owned pagination uses a `timestamp|uuid` cursor format:

```text
checkpoint: "2025-10-21T19:10:00.456Z|abc-123-uuid"
```
Query logic handles identical timestamps from batch inserts:
```sql
WHERE (updated_at > cursor_timestamp)
   OR (updated_at = cursor_timestamp AND uuid > cursor_uuid)
ORDER BY updated_at, uuid
LIMIT {configured_batch_size}
```
Shared resource pagination uses HLC cursors:
```rust
SharedChangeRequest {
    since_hlc: Some(last_hlc), // Resume from this HLC
    limit: config.batching.backfill_batch_size,
}
```
The peer log query returns the next batch starting after the provided HLC, maintaining total ordering.
Both pagination strategies ensure all records are fetched exactly once, no records are skipped even with identical timestamps, and backfill is resumable from checkpoint if interrupted.
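Encoding and decoding the `timestamp|uuid` cursor can be sketched like this (helper names hypothetical):

```rust
// Hypothetical sketch: encode/decode the "timestamp|uuid" resume cursor
// used for device-owned pagination.
fn encode_cursor(timestamp: &str, uuid: &str) -> String {
    format!("{timestamp}|{uuid}")
}

fn decode_cursor(cursor: &str) -> Option<(&str, &str)> {
    // split_once splits on the first '|' only, and returns None for a
    // malformed cursor instead of panicking.
    cursor.split_once('|')
}

fn main() {
    let cursor = encode_cursor("2025-10-21T19:10:00.456Z", "abc-123-uuid");
    let (ts, uuid) = decode_cursor(&cursor).expect("well-formed cursor");
    assert_eq!(ts, "2025-10-21T19:10:00.456Z");
    assert_eq!(uuid, "abc-123-uuid");
    assert_eq!(decode_cursor("no-separator"), None);
    println!("resume at {ts} / {uuid}");
}
```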
The sync protocol uses JSON-serialized messages over Iroh/QUIC streams:
| Message | Direction | Purpose |
|---|---|---|
| StateChange | Broadcast | Single device-owned record update |
| StateBatch | Broadcast | Batch of device-owned records |
| StateRequest | Request | Pull device-owned data from peer |
| StateResponse | Response | Device-owned data with tombstones |
| SharedChange | Broadcast | Single shared resource update (HLC) |
| SharedChangeBatch | Broadcast | Batch of shared resource updates |
| SharedChangeRequest | Request | Pull shared changes since HLC |
| SharedChangeResponse | Response | Shared changes + state snapshot |
| AckSharedChanges | Broadcast | Acknowledge receipt (enables pruning) |
| Heartbeat | Broadcast | Peer status with watermarks |
| WatermarkExchangeRequest | Request | Request peer's sync progress |
| WatermarkExchangeResponse | Response | Peer's watermarks for catch-up |
| Error | Response | Error message |
```rust
// Device-owned state change
StateChange {
    library_id: Uuid,
    model_type: String,      // "location", "entry", etc.
    record_uuid: Uuid,
    device_id: Uuid,         // Owner device
    data: serde_json::Value, // Record as JSON
    timestamp: DateTime<Utc>,
}

// Batch of device-owned changes
StateBatch {
    library_id: Uuid,
    model_type: String,
    device_id: Uuid,
    records: Vec<StateRecord>, // [{uuid, data, timestamp}, ...]
}

// Request device-owned state
StateRequest {
    library_id: Uuid,
    model_types: Vec<String>,
    device_id: Option<Uuid>,    // Specific device or all
    since: Option<DateTime>,    // Incremental sync
    checkpoint: Option<String>, // Resume cursor
    batch_size: usize,
}

// Response with device-owned state
StateResponse {
    library_id: Uuid,
    model_type: String,
    device_id: Uuid,
    records: Vec<StateRecord>,
    deleted_uuids: Vec<Uuid>,   // Tombstones
    checkpoint: Option<String>, // Next page cursor
    has_more: bool,
}

// Shared resource change (HLC-ordered)
SharedChange {
    library_id: Uuid,
    entry: SharedChangeEntry,
}

SharedChangeEntry {
    hlc: HLC, // Ordering key
    model_type: String,
    record_uuid: Uuid,
    change_type: ChangeType, // Insert, Update, Delete
    data: serde_json::Value,
}

// Heartbeat with sync progress
Heartbeat {
    library_id: Uuid,
    device_id: Uuid,
    timestamp: DateTime<Utc>,
    state_watermark: Option<DateTime>, // Last state sync
    shared_watermark: Option<HLC>,     // Last shared change
}

// Watermark exchange for reconnection
WatermarkExchangeRequest {
    library_id: Uuid,
    device_id: Uuid,
    my_state_watermark: Option<DateTime>,
    my_shared_watermark: Option<HLC>,
}

WatermarkExchangeResponse {
    library_id: Uuid,
    device_id: Uuid,
    state_watermark: Option<DateTime>,
    shared_watermark: Option<HLC>,
    needs_state_catchup: bool,
    needs_shared_catchup: bool,
}
```
The sync system uses the Iroh networking layer as the source of truth for device connectivity. When checking if a peer is online, the system queries Iroh's active connections directly rather than relying on cached state.
A background monitor updates the devices table at configured intervals for UI purposes:
```sql
UPDATE devices SET
    is_online = true,
    last_seen_at = NOW()
WHERE uuid = 'peer-device-id';
```
All sync decisions use real-time Iroh connectivity checks, ensuring messages only send to reachable peers.
Some data is computed locally and never syncs, such as derived closure tables (e.g., entry_closure, tag_closure).
These rebuild automatically from synced base data.
Failed sync messages are automatically retried with exponential backoff:
| Attempt | Delay | Action |
|---|---|---|
| 1 | 5s | First retry |
| 2 | 10s | Second retry |
| 3 | 20s | Third retry |
| 4 | 40s | Fourth retry |
| 5 | 80s | Final retry |
| 6+ | - | Message dropped |
1. Broadcast fails (peer unreachable, timeout, etc.)
2. Message queued with next_retry = now + 5s
3. Background task checks queue every sync_loop_interval
4. Ready messages retried in order
5. Success: remove from queue
6. Failure: re-queue with doubled delay
7. After 5 attempts: drop and log warning
The `retry_queue_depth` metric tracks the current queue size.

The retry queue handles transient network failures without blocking real-time sync. Permanent failures eventually resolve via watermark-based catch-up when the peer reconnects.
A key feature of Spacedrive is the ability to move external drives between devices without losing track of the data. This is handled through a special sync process that allows the "ownership" of a Location to change.
When you move a volume from one device to another, the Location associated with that volume must be assigned a new owner. This process is designed to be extremely efficient, avoiding the need for costly re-indexing or bulk data updates.
It is handled using a Hybrid Ownership Sync model:
<Steps> <Step title="Ownership Change is Requested"> When a device detects a known volume that it does not own, it broadcasts a special `RequestLocationOwnership` event. Unlike normal device-owned data, this event is sent to the HLC-ordered log, treating it like a shared resource update. </Step> <Step title="Peers Process the Change"> Every device in the library processes this event in the same, deterministic order. Upon processing, each peer performs a single, atomic update on its local database: `UPDATE locations SET device_id = 'new_owner_id' WHERE uuid = 'location_uuid'` </Step> <Step title="Ownership is Transferred Instantly"> This single-row update is all that is required. Because an `Entry`'s ownership is inherited from its parent `Location` at runtime, this change instantly transfers ownership of millions of files. No bulk updates are needed on the `entries` or `directory_paths` tables. The new owner then takes over state-based sync for that `Location`. </Step> </Steps>

A simpler scenario is when a volume's mount point changes on the same device (e.g., from D:\ to E:\ on Windows).
- The owning device updates the `path` field on its Location record.
- A single SQL statement rewrites the `directory_paths` table to replace the old path prefix with the new one (e.g., `REPLACE(path, 'D:\', 'E:\')`).
- The `entries` table, which is the largest, is completely untouched. This makes the operation much faster than a full re-index.

| Aspect | Device-Owned | Shared Resources |
|---|---|---|
| Storage | No log | Small peer log |
| Conflicts | Impossible | HLC-resolved |
| Offline | Queues state changes | Queues to peer log |
**Batching:** The sync system batches both device-owned and shared resource operations. Batch sizes are configurable via `SyncConfig`.
Device-owned data syncs in batches during file indexing. One `StateBatch` message replaces many individual `StateChange` messages, significantly improving throughput.
Shared resources likewise send batch messages instead of individual changes. For example, linking thousands of files to content identities during indexing sends a handful of network messages instead of one per file, substantially reducing network traffic.
Both batch types still write individual entries to the sync log for proper HLC ordering and conflict resolution. The optimization is purely in network broadcast efficiency.
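As an illustration of the batching idea, the sketch below groups per-record changes into one batch per model type. `StateChange` and `StateBatch` here are simplified stand-ins for the real sync messages, not Spacedrive's actual types:

```rust
// Illustrative sketch: collapse per-record changes into one batch message
// per model type. Types are simplified stand-ins, not the real protocol.
use std::collections::BTreeMap;

#[derive(Debug, Clone)]
struct StateChange {
    model_type: String,
    record_uuid: u128, // stand-in for a real Uuid
    data: String,      // stand-in for serialized record data
}

#[derive(Debug)]
struct StateBatch {
    model_type: String,
    changes: Vec<StateChange>,
}

/// Group pending changes by model type so each model sends a single
/// batch message instead of one network message per record.
fn batch_changes(pending: Vec<StateChange>) -> Vec<StateBatch> {
    let mut by_model: BTreeMap<String, Vec<StateChange>> = BTreeMap::new();
    for change in pending {
        by_model.entry(change.model_type.clone()).or_default().push(change);
    }
    by_model
        .into_iter()
        .map(|(model_type, changes)| StateBatch { model_type, changes })
        .collect()
}

fn main() {
    let pending: Vec<StateChange> = (0..1000u128)
        .map(|i| StateChange {
            model_type: "entry".to_string(),
            record_uuid: i,
            data: format!("record-{i}"),
        })
        .collect();
    let batches = batch_changes(pending);
    // 1000 individual changes collapse into a single network message.
    assert_eq!(batches.len(), 1);
    assert_eq!(batches[0].changes.len(), 1000);
}
```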
**Pruning:** The sync log automatically removes entries after all peers acknowledge receipt, keeping the sync database under 1MB.
**Compression:** Network messages are compressed to reduce bandwidth usage.
**Caching:** Backfill responses are cached for 15 minutes to improve performance when multiple devices join simultaneously.
Debug commands:
```bash
# Check pending changes
sqlite3 sync.db "SELECT COUNT(*) FROM shared_changes"

# Verify peer connections
sd sync status

# Monitor sync activity
RUST_LOG=sd_core::sync=debug cargo run
```
**Large sync.db:** Peers are not acknowledging. Check network connectivity.
**Missing data:** Verify dependency order. Parents must sync before children.
**Conflicts:** Check that the HLC implementation maintains ordering.
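As a reference point when debugging ordering issues, here is a minimal, illustrative HLC in Rust. This is not Spacedrive's actual implementation: timestamps are `(physical time, counter, device id)` tuples compared lexicographically, with `tick` issuing local timestamps and `receive` merging a peer's timestamp while preserving monotonicity.

```rust
// Minimal Hybrid Logical Clock sketch (illustration only).
use std::time::{SystemTime, UNIX_EPOCH};

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct Hlc {
    physical_ms: u64, // wall-clock milliseconds
    counter: u32,     // logical counter for same-millisecond events
    device_id: u64,   // tiebreaker so no two timestamps compare equal
}

struct HlcClock {
    device_id: u64,
    last: Hlc,
}

impl HlcClock {
    fn new(device_id: u64) -> Self {
        Self { device_id, last: Hlc { physical_ms: 0, counter: 0, device_id } }
    }

    fn wall_ms() -> u64 {
        SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_millis() as u64
    }

    /// Issue a timestamp for a local event.
    fn tick(&mut self) -> Hlc {
        let now = Self::wall_ms();
        if now > self.last.physical_ms {
            self.last = Hlc { physical_ms: now, counter: 0, device_id: self.device_id };
        } else {
            // Wall clock hasn't advanced (or went backwards): bump the counter.
            self.last.counter += 1;
        }
        self.last
    }

    /// Merge a timestamp received from a peer, keeping local monotonicity.
    fn receive(&mut self, remote: Hlc) -> Hlc {
        let now = Self::wall_ms();
        let max_physical = now.max(self.last.physical_ms).max(remote.physical_ms);
        let counter = if max_physical == self.last.physical_ms && max_physical == remote.physical_ms {
            self.last.counter.max(remote.counter) + 1
        } else if max_physical == self.last.physical_ms {
            self.last.counter + 1
        } else if max_physical == remote.physical_ms {
            remote.counter + 1
        } else {
            0
        };
        self.last = Hlc { physical_ms: max_physical, counter, device_id: self.device_id };
        self.last
    }
}

fn main() {
    let mut clock = HlcClock::new(1);
    let a = clock.tick();
    let b = clock.tick();
    assert!(b > a); // local timestamps are strictly increasing

    // A remote timestamp from a device whose clock runs far ahead still
    // yields a locally monotonic timestamp that orders after it.
    let remote = Hlc { physical_ms: a.physical_ms + 60_000, counter: 3, device_id: 2 };
    let merged = clock.receive(remote);
    assert!(merged > remote && merged > b);
}
```

Because every device applies the same comparison, all peers converge on the same order for shared-resource changes regardless of wall-clock drift.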
The sync system defines specific error types for different failure modes:
```rust
/// HLC parsing failures
HLCError::ParseError(String)

/// Peer log database errors
PeerLogError {
    ConnectionError(String),    // Can't open sync.db
    QueryError(String),         // SQL query failed
    SerializationError(String), // JSON encode/decode failed
    ParseError(String),         // Invalid data format
}

/// Watermark tracking errors
WatermarkError {
    QueryError(String),
    ParseError(String),
}

/// Checkpoint persistence errors
CheckpointError {
    QueryError(String),
    ParseError(String),
}

/// Change application errors
ApplyError {
    UnknownModel(String),                   // Model not registered
    MissingFkLookup(String),                // FK mapper not configured
    WrongSyncType { model, expected, got }, // Device-owned vs shared mismatch
    MissingApplyFunction(String),           // No apply handler
    MissingQueryFunction(String),           // No query handler
    MissingDeletionHandler(String),         // No deletion handler
    DatabaseError(String),                  // DB operation failed
}

/// Model dependency errors
DependencyError {
    CircularDependency(String),        // A → B → A detected
    UnknownDependency(String, String), // Depends on unregistered model
    NoModels,                          // Empty registry
}

/// Transaction errors
TxError {
    Database(DbErr),                  // SeaORM error
    SyncLog(String),                  // Peer log write failed
    Serialization(serde_json::Error), // JSON error
    InvalidModel(String),             // Model validation failed
}
```
All errors implement `std::error::Error` and include context for debugging.
The sync system collects metrics for monitoring and debugging.
State Metrics:

- `current_state` - Current sync state (Uninitialized, Backfilling, etc.)
- `state_entered_at` - When the current state started
- `state_history` - Recent state transitions (ring buffer)
- `total_time_in_state` - Cumulative time per state
- `transition_count` - Number of state transitions

Operation Metrics:

- `broadcasts_sent` - Total broadcast messages sent
- `state_changes_broadcast` - Device-owned changes broadcast
- `shared_changes_broadcast` - Shared resource changes broadcast
- `changes_received` - Updates received from peers
- `changes_applied` - Successfully applied updates
- `changes_rejected` - Updates rejected (conflict, error)
- `active_backfill_sessions` - Concurrent backfills in progress
- `retry_queue_depth` - Messages waiting for retry

Data Volume Metrics:

- `entries_synced` - Records synced per model type
- `entries_by_device` - Records synced per peer device
- `bytes_sent` / `bytes_received` - Network bandwidth
- `last_sync_per_peer` - Last sync timestamp per device
- `last_sync_per_model` - Last sync timestamp per model

Performance Metrics:

- `broadcast_latency` - Time to broadcast to all peers (histogram)
- `apply_latency` - Time to apply received changes (histogram)
- `backfill_request_latency` - Backfill round-trip time (histogram)
- `peer_rtt_ms` - Per-peer round-trip time
- `watermark_lag_ms` - How far behind each peer is
- `hlc_physical_drift_ms` - Clock drift detected via HLC
- `hlc_counter_max` - Highest logical counter seen

Error Metrics:

- `total_errors` - Total error count
- `network_errors` - Connection/timeout failures
- `database_errors` - DB operation failures
- `apply_errors` - Change application failures
- `validation_errors` - Invalid data received
- `recent_errors` - Last N errors with details
- `conflicts_detected` - Concurrent modification conflicts
- `conflicts_resolved_by_hlc` - Conflicts resolved via HLC

Performance metrics use histograms with atomic min/max/avg tracking:
```rust
HistogramMetric {
    count: AtomicU64, // Number of samples
    sum: AtomicU64,   // Sum for average
    min: AtomicU64,   // Minimum value
    max: AtomicU64,   // Maximum value
}

// Methods
histogram.avg()   // Average latency
histogram.min()   // Best case
histogram.max()   // Worst case
histogram.count() // Sample count
```
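The struct above can be made runnable with a small amount of glue. This sketch uses `fetch_min`/`fetch_max` on `AtomicU64` for lock-free updates; the `record` method name is an assumption for illustration:

```rust
// Runnable sketch of the lock-free histogram described above.
// Field names mirror the documentation; `record` is an assumed method.
use std::sync::atomic::{AtomicU64, Ordering};

struct HistogramMetric {
    count: AtomicU64,
    sum: AtomicU64,
    min: AtomicU64,
    max: AtomicU64,
}

impl HistogramMetric {
    fn new() -> Self {
        Self {
            count: AtomicU64::new(0),
            sum: AtomicU64::new(0),
            min: AtomicU64::new(u64::MAX),
            max: AtomicU64::new(0),
        }
    }

    /// Record one latency sample (e.g. milliseconds) without locking.
    fn record(&self, value: u64) {
        self.count.fetch_add(1, Ordering::Relaxed);
        self.sum.fetch_add(value, Ordering::Relaxed);
        self.min.fetch_min(value, Ordering::Relaxed);
        self.max.fetch_max(value, Ordering::Relaxed);
    }

    fn avg(&self) -> u64 {
        let count = self.count.load(Ordering::Relaxed);
        if count == 0 { 0 } else { self.sum.load(Ordering::Relaxed) / count }
    }

    fn min(&self) -> u64 { self.min.load(Ordering::Relaxed) }
    fn max(&self) -> u64 { self.max.load(Ordering::Relaxed) }
    fn count(&self) -> u64 { self.count.load(Ordering::Relaxed) }
}

fn main() {
    let latency = HistogramMetric::new();
    for sample in [12, 7, 30] {
        latency.record(sample);
    }
    assert_eq!(latency.count(), 3);
    assert_eq!(latency.min(), 7);
    assert_eq!(latency.max(), 30);
    assert_eq!(latency.avg(), 16); // (12 + 7 + 30) / 3 = 16
}
```

Atomic counters keep `record` cheap enough to call on every broadcast and apply without contending with the sync hot path.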
Metrics can be captured as point-in-time snapshots:
```rust
let snapshot = sync_service.metrics().snapshot().await;

// Filter by time range
let recent = snapshot.filter_since(one_hour_ago);

// Filter by peer
let alice_metrics = snapshot.filter_by_peer(alice_device_id);

// Filter by model
let entry_metrics = snapshot.filter_by_model("entry");
```
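A simplified, runnable model of these filters, with `MetricSample` and `Snapshot` as hypothetical stand-ins for the real snapshot type:

```rust
// Hypothetical stand-ins for the real snapshot API, to show the filter
// semantics: each filter returns a new snapshot with a subset of samples.
#[derive(Debug, Clone, PartialEq)]
struct MetricSample {
    timestamp_ms: u64,
    peer_id: u32,
    model: String,
    value: u64,
}

#[derive(Debug, Clone)]
struct Snapshot {
    samples: Vec<MetricSample>,
}

impl Snapshot {
    /// Keep only samples recorded at or after `since_ms`.
    fn filter_since(&self, since_ms: u64) -> Snapshot {
        Snapshot { samples: self.samples.iter().filter(|s| s.timestamp_ms >= since_ms).cloned().collect() }
    }

    /// Keep only samples attributed to one peer device.
    fn filter_by_peer(&self, peer_id: u32) -> Snapshot {
        Snapshot { samples: self.samples.iter().filter(|s| s.peer_id == peer_id).cloned().collect() }
    }

    /// Keep only samples for one model type (e.g. "entry").
    fn filter_by_model(&self, model: &str) -> Snapshot {
        Snapshot { samples: self.samples.iter().filter(|s| s.model == model).cloned().collect() }
    }
}

fn main() {
    let snapshot = Snapshot {
        samples: vec![
            MetricSample { timestamp_ms: 100, peer_id: 1, model: "entry".to_string(), value: 10 },
            MetricSample { timestamp_ms: 200, peer_id: 2, model: "tag".to_string(), value: 5 },
            MetricSample { timestamp_ms: 300, peer_id: 1, model: "entry".to_string(), value: 7 },
        ],
    };
    assert_eq!(snapshot.filter_since(200).samples.len(), 2);
    assert_eq!(snapshot.filter_by_peer(1).samples.len(), 2);
    assert_eq!(snapshot.filter_by_model("tag").samples.len(), 1);
}
```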
A ring buffer stores recent snapshots for time-series analysis:
```rust
MetricsHistory {
    capacity: 1000, // Max snapshots retained
    snapshots: VecDeque<SyncMetricsSnapshot>,
}

// Query methods
history.get_snapshots_since(timestamp)
history.get_snapshots_range(start, end)
history.get_latest_snapshot()
```
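The eviction behavior can be sketched with a stubbed snapshot type; when the buffer is at capacity, the oldest snapshot is dropped first (the `push` method name is an assumption):

```rust
// Bounded snapshot history sketch: oldest-first eviction via VecDeque.
use std::collections::VecDeque;

#[derive(Debug, Clone, PartialEq)]
struct SnapshotStub {
    taken_at_ms: u64,
}

struct MetricsHistory {
    capacity: usize,
    snapshots: VecDeque<SnapshotStub>,
}

impl MetricsHistory {
    fn new(capacity: usize) -> Self {
        Self { capacity, snapshots: VecDeque::with_capacity(capacity) }
    }

    /// Append a snapshot, evicting the oldest one when at capacity.
    fn push(&mut self, snapshot: SnapshotStub) {
        if self.snapshots.len() == self.capacity {
            self.snapshots.pop_front();
        }
        self.snapshots.push_back(snapshot);
    }

    fn get_snapshots_since(&self, since_ms: u64) -> Vec<&SnapshotStub> {
        self.snapshots.iter().filter(|s| s.taken_at_ms >= since_ms).collect()
    }

    fn get_latest_snapshot(&self) -> Option<&SnapshotStub> {
        self.snapshots.back()
    }
}

fn main() {
    let mut history = MetricsHistory::new(3);
    for t in [10, 20, 30, 40] {
        history.push(SnapshotStub { taken_at_ms: t });
    }
    // Capacity 3: the oldest snapshot (t=10) was evicted.
    assert_eq!(history.snapshots.len(), 3);
    assert_eq!(history.get_latest_snapshot().unwrap().taken_at_ms, 40);
    assert_eq!(history.get_snapshots_since(30).len(), 2);
}
```

A fixed capacity bounds memory regardless of how long the sync service runs, at the cost of losing the oldest data points.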
Metrics are persisted to the database every 5 minutes (configurable via `metrics_log_interval_secs`). This enables post-mortem analysis of sync issues.
The sync system uses a dedicated event bus separate from the general application event bus:
The general `EventBus` handles high-volume events (filesystem changes, job progress, UI updates). During heavy indexing, thousands of events per second can queue up.
The `SyncEventBus` is isolated so that sync events are never starved:
```rust
enum SyncEvent {
    // Device-owned state change ready to broadcast
    StateChange {
        library_id: Uuid,
        model_type: String,
        record_uuid: Uuid,
        device_id: Uuid,
        data: serde_json::Value,
        timestamp: DateTime<Utc>,
    },
    // Shared resource change ready to broadcast
    SharedChange {
        library_id: Uuid,
        entry: SharedChangeEntry,
    },
    // Metrics snapshot available
    MetricsUpdated {
        library_id: Uuid,
        metrics: SyncMetricsSnapshot,
    },
}
```
| Event | Critical | Can Drop |
|---|---|---|
| `StateChange` | Yes | No |
| `SharedChange` | Yes | No |
| `MetricsUpdated` | No | Yes |
Critical events trigger warnings if the bus lags. Non-critical events are silently dropped under load.
The event listener batches events before broadcasting:
1. Event arrives on the SyncEventBus
2. Event is added to the batch buffer
3. When the buffer reaches 100 entries OR 50ms has elapsed:
   - Flush the batch as a single network message
   - Reset the buffer and timer
This reduces network overhead during rapid operations (e.g., bulk tagging).
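The flow above can be sketched as a synchronous batcher (the real listener is asynchronous); `EventBatcher` and its API are illustrative, not Spacedrive's types:

```rust
// Flush-on-size-or-timeout batcher sketch. Synchronous for clarity;
// the real event listener runs asynchronously.
use std::time::{Duration, Instant};

struct EventBatcher<T> {
    buffer: Vec<T>,
    max_entries: usize,
    flush_interval: Duration,
    last_flush: Instant,
}

impl<T> EventBatcher<T> {
    fn new(max_entries: usize, flush_interval: Duration) -> Self {
        Self { buffer: Vec::new(), max_entries, flush_interval, last_flush: Instant::now() }
    }

    /// Add an event; returns a full batch when either flush condition is met.
    fn push(&mut self, event: T) -> Option<Vec<T>> {
        self.buffer.push(event);
        if self.buffer.len() >= self.max_entries || self.last_flush.elapsed() >= self.flush_interval {
            self.last_flush = Instant::now();
            Some(std::mem::take(&mut self.buffer)) // hand off the batch, reset the buffer
        } else {
            None
        }
    }
}

fn main() {
    // 100 entries or 50 ms, matching the defaults in the flow above.
    let mut batcher = EventBatcher::new(100, Duration::from_millis(50));
    let mut batches = 0;
    for i in 0..250 {
        if batcher.push(i).is_some() {
            batches += 1;
        }
    }
    // 250 rapid events produce at least two flushes (size-triggered),
    // with any remainder still buffered awaiting the timer.
    assert!(batches >= 2);
    assert!(batcher.buffer.len() <= 100);
}
```

The size bound caps per-message payloads while the timer bounds latency for trickles of events, so neither bulk tagging nor a single tag edit waits long.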
<Info>See `core/tests/sync_backfill_test.rs`, `core/tests/sync_realtime_test.rs`, and `core/tests/sync_metrics_test.rs` for the test suite.</Info>
- Sync helpers (`sync_model`, `sync_model_with_db`, `sync_models_batch`)
- `inventory`-based model registration

Device-Owned Models (3):
| Model | Table | Dependencies | FK Mappings | Features |
|---|---|---|---|---|
| Volume | volumes | device | device_id → devices | with_deletion |
| Location | locations | volume | volume_id → volumes, entry_id → entries | with_deletion |
| Entry | entries | volume, content_identity, user_metadata | volume_id → volumes, parent_id → entries, metadata_id → user_metadata, content_id → content_identities | with_deletion, with_rebuild |
Shared Models (16):
| Model | Table | Dependencies | FK Mappings | Features |
|---|---|---|---|---|
| Device | devices | None | None | Library membership |
| Tag | tag | None | None | - |
| TagRelationship | tag_relationship | tag | parent_tag_id → tag, child_tag_id → tag | with_rebuild |
| Collection | collection | None | None | - |
| CollectionEntry | collection_entry | collection, entry | collection_id → collection, entry_id → entries | - |
| ContentIdentity | content_identities | None | None | Deterministic UUID |
| UserMetadata | user_metadata | None | None | - |
| UserMetadataTag | user_metadata_tag | user_metadata, tag | user_metadata_id → user_metadata, tag_id → tag, device_uuid → devices | - |
| AuditLog | audit_log | None | None | - |
| Sidecar | sidecar | content_identity | content_uuid → content_identities | - |
| Space | spaces | None | None | - |
| SpaceGroup | space_groups | space | space_id → spaces | - |
| SpaceItem | space_items | space, space_group | space_id → spaces, group_id → space_groups | - |
| VideoMediaData | video_media_data | None | None | - |
| AudioMediaData | audio_media_data | None | None | - |
| ImageMediaData | image_media_data | None | None | - |
Each model excludes certain fields from sync (local-only data):
| Model | Excluded Fields |
|---|---|
| Device | id |
| Location | id, scan_state, error_message, job_policies, created_at, updated_at |
| Entry | id, indexed_at |
| Volume | id, is_online, last_seen_at, last_speed_test_at, tracked_at |
| ContentIdentity | id, mime_type_id, kind_id, entry_count, *_media_data_id, first_seen_at, last_verified_at |
| UserMetadata | id, created_at, updated_at |
| AuditLog | id, created_at, updated_at, job_id |
| Sidecar | id, source_entry_id |
All models sync automatically during creation, updates, and deletions. File indexing uses batch sync for both device-owned entries (`StateBatch`) and shared content identities (`SharedChangeBatch`) to reduce network overhead.
**Deletion sync:** Device-owned models (locations, entries, volumes) use cascading tombstones. The `device_state_tombstones` table tracks root UUIDs of deleted trees. Shared models use standard `ChangeType::Delete` in the peer log. Both mechanisms prune automatically once all devices have synced.
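To illustrate acknowledgment-based pruning, the sketch below (not the actual schema or code) drops a tombstone once the slowest peer's watermark has passed its deletion timestamp:

```rust
// Illustrative model of acknowledgment-based tombstone pruning: a
// tombstone is removable once every peer's watermark is past it.
use std::collections::HashMap;

#[derive(Debug, Clone, PartialEq)]
struct Tombstone {
    root_uuid: u128, // root of the deleted tree (stand-in for a Uuid)
    deleted_at: u64, // timestamp of the deletion (simplified)
}

/// Retain only tombstones that some peer has not yet synced past.
/// `peer_watermarks` maps each peer device to its last-synced timestamp.
fn prune_tombstones(tombstones: &mut Vec<Tombstone>, peer_watermarks: &HashMap<u32, u64>) {
    // With no known peers, retain everything (conservative default).
    let slowest_peer = peer_watermarks.values().copied().min().unwrap_or(0);
    tombstones.retain(|t| t.deleted_at > slowest_peer);
}

fn main() {
    let mut tombstones = vec![
        Tombstone { root_uuid: 1, deleted_at: 100 },
        Tombstone { root_uuid: 2, deleted_at: 500 },
    ];
    // Two peers: one has synced through t=300, the other through t=450.
    let watermarks = HashMap::from([(1u32, 300u64), (2, 450)]);
    prune_tombstones(&mut tombstones, &watermarks);
    // The t=100 deletion is acknowledged by everyone and is pruned;
    // the t=500 deletion must be retained until both peers pass it.
    assert_eq!(tombstones.len(), 1);
    assert_eq!(tombstones[0].root_uuid, 2);
}
```

The 7-day `tombstone_max_retention_days` limit described in the configuration section bounds this retention when a peer stays offline, forcing that peer into a full sync instead.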
<Note>Extension sync framework is ready. SDK integration pending.</Note>
Extensions can define syncable models using the same infrastructure as core models. The registry pattern automatically handles new model types without code changes to the sync system.
Extensions will declare models with sync metadata:
```rust
#[model(
    table_name = "album",
    sync_strategy = "shared"
)]
struct Album {
    #[primary_key]
    id: Uuid,
    title: String,
    #[metadata]
    metadata_id: i32,
}
```
The sync system will detect and register extension models at runtime, applying the same HLC-based conflict resolution and dependency ordering used for core models.
Sync behavior is controlled through a unified configuration system. All timing, batching, and retention parameters are configurable per library.
The system uses sensible defaults tuned for typical usage across LAN and internet connections:
```rust
SyncConfig {
    batching: BatchingConfig {
        backfill_batch_size: 10_000,            // Records per backfill request
        state_broadcast_batch_size: 1_000,      // Device-owned records per broadcast
        shared_broadcast_batch_size: 100,       // Shared records per broadcast
        max_snapshot_size: 100_000,             // Max records in state snapshot
        realtime_batch_max_entries: 100,        // Max entries before flush
        realtime_batch_flush_interval_ms: 50,   // Auto-flush interval (ms)
    },
    retention: RetentionConfig {
        strategy: AcknowledgmentBased,
        tombstone_max_retention_days: 7,        // Hard limit for tombstone pruning
        peer_log_max_retention_days: 7,         // Hard limit for peer log pruning
        force_full_sync_threshold_days: 25,     // Force full sync if watermark older
    },
    network: NetworkConfig {
        message_timeout_secs: 30,               // Timeout for sync messages
        backfill_request_timeout_secs: 60,      // Timeout for backfill requests
        sync_loop_interval_secs: 5,             // Sync loop check interval
        connection_check_interval_secs: 10,     // How often to check peer connectivity
    },
    monitoring: MonitoringConfig {
        pruning_interval_secs: 3600,            // How often to prune sync.db (1 hour)
        enable_metrics: true,                   // Enable sync metrics collection
        metrics_log_interval_secs: 300,         // Persist metrics every 5 minutes
    },
}
```
Batching controls how many records are processed at once. Larger batches improve throughput but increase memory usage. Real-time batching collects changes for a short interval before flushing to reduce network overhead during rapid operations.
Retention controls how long sync coordination data is kept. The acknowledgment-based strategy prunes tombstones and peer log entries as soon as all devices have synced past them. A 7-day safety limit prevents offline devices from blocking pruning indefinitely.
Network controls timeouts and polling intervals. Shorter intervals provide faster sync but increase network traffic and CPU usage.
Monitoring controls metrics collection and sync database maintenance. Metrics track operations, latency, and data volumes for debugging and observability.
**Aggressive** is optimized for fast local networks with always-online devices. Small batches and frequent pruning minimize storage and latency.
**Conservative** handles unreliable networks and frequently offline devices. Large batches improve efficiency, and extended retention accommodates longer offline periods.
**Mobile** optimizes for battery life and bandwidth. Less frequent sync checks and longer retention reduce power consumption.
```bash
# Use a preset
sd sync config set --preset aggressive

# Customize individual settings
sd sync config set --batch-size 5000 --retention-days 14

# Per-library configuration
sd library "Photos" sync config set --preset mobile
```
Configuration can also be set via environment variables or a TOML file. The loading priority is: environment variables, config file, database, then defaults.
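The stated priority can be illustrated with a single field; the function and constant names below are assumptions for the sketch, not the real API:

```rust
// Illustrative resolution of one config value through the stated
// priority chain: env vars > config file > database > defaults.
fn resolve_batch_size(
    env: Option<u32>,      // e.g. parsed from an environment variable
    file: Option<u32>,     // e.g. parsed from a TOML config file
    database: Option<u32>, // per-library value stored in the database
) -> u32 {
    const DEFAULT_BACKFILL_BATCH_SIZE: u32 = 10_000; // matches the default above
    env.or(file).or(database).unwrap_or(DEFAULT_BACKFILL_BATCH_SIZE)
}

fn main() {
    // An environment variable wins over every other source.
    assert_eq!(resolve_batch_size(Some(2_000), Some(5_000), Some(8_000)), 2_000);
    // A config-file value beats the database value.
    assert_eq!(resolve_batch_size(None, Some(5_000), Some(8_000)), 5_000);
    // With no overrides anywhere, the default applies.
    assert_eq!(resolve_batch_size(None, None, None), 10_000);
}
```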
Library Sync is not traditional multi-leader replication. While multi-leader systems treat all nodes as equal for all data, Spacedrive recognizes a fundamental truth: `/Users/alice/Photos` can only be modified by Alice's laptop. This ownership model eliminates the entire class of filesystem conflicts that plague distributed systems.
Hybrid ownership splits the problem in two. Device-owned data (files, folders, volumes) uses single-leader-per-resource semantics. No CRDTs, no version vectors, no conflict resolution needed. The owner's state is authoritative. Shared resources (tags, collections) use multi-leader semantics with HLC-based ordering. Only this small subset of data requires conflict resolution, and last-write-wins is sufficient.
State-based sync handles device-owned data. Changes propagate via real-time broadcasts to connected peers. Historical data transfers via pull requests when devices join or reconnect.
Log-based sync handles shared resources. Hybrid Logical Clocks maintain causal ordering without clock synchronization. All devices converge to the same state regardless of network topology.
Automatic recovery handles offline periods through watermark-based incremental sync. Reconnecting devices receive only changes since their last sync, typically a small delta rather than the full dataset.
The system is production-ready with all core models syncing automatically. Extensions can use the same infrastructure to sync custom models.