docs/core/locations.mdx
Locations are directories that Spacedrive tracks and monitors. When you add a location, Spacedrive indexes its contents and watches for changes in real-time.
A location is any folder on your device that you want Spacedrive to index. Once added:
Locations have a sophisticated relationship with the file index:
When you add a location, Spacedrive creates a root entry in the entries table for the location directory itself. This root entry:
- Has `parent_id = NULL` (it is a root)
- Is registered in the `entry_closure` table
- Is recorded in the `directory_paths` table with its full filesystem path

```rust
// Location points to root entry
Location {
    entry_id: Some(123), // References entries table
    // ... other fields
}

// Root entry for "/Users/alice/Documents"
Entry {
    id: 123,
    name: "Documents",
    kind: Directory,
    parent_id: None, // Root has no parent
    // ... other fields
}
```
This design enables nested locations without duplicating the file tree. If you have:
- Location A: `/Users/alice/Documents` (entry_id: 123)
- Location B: `/Users/alice/Documents/Work` (entry_id: 124)

The entries table contains a single unified tree:

```
Entry(123): "Documents"  parent_id: NULL
├─ Entry(124): "Work"        parent_id: 123  ← Location B points here
│  └─ Entry(125): "report.pdf"  parent_id: 124
└─ Entry(126): "Photos"      parent_id: 123
   └─ Entry(127): "sunset.jpg"  parent_id: 126
```
Benefits:
- `Work/report.pdf` exists once in the database, accessible from both locations
- Overlapping locations can use different index modes, e.g. Deep for `/Photos` but Shallow for `/Photos/RAW`, without duplicating entries

Without this design, overlapping locations would require duplicating all entries, leading to sync conflicts and wasted storage.
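The shared-tree idea can be sketched with plain parent pointers. The std-only Rust below (an illustration, not Spacedrive's implementation; ids mirror the example tree) builds the ancestor/descendant pairs that `entry_closure` stores and shows both location roots resolving their subtrees from the same entries:

```rust
use std::collections::HashMap;

/// Build the (ancestor, descendant) pairs that the entry_closure table
/// stores, starting from plain parent pointers.
fn closure(parents: &HashMap<u32, Option<u32>>) -> Vec<(u32, u32)> {
    let mut pairs = Vec::new();
    for (&id, _) in parents {
        // Walk up from each entry; every ancestor (including the
        // entry itself) gets a pair pointing down at it.
        let mut cur = Some(id);
        while let Some(c) = cur {
            pairs.push((c, id));
            cur = parents[&c];
        }
    }
    pairs
}

/// All entries under an ancestor: a flat scan, no recursion.
fn descendants(pairs: &[(u32, u32)], ancestor: u32) -> Vec<u32> {
    pairs.iter().filter(|&&(a, _)| a == ancestor).map(|&(_, d)| d).collect()
}

fn main() {
    // The unified tree from the example above.
    let parents = HashMap::from([
        (123, None),      // Documents (root of Location A)
        (124, Some(123)), // Work (also the root of Location B)
        (125, Some(124)), // report.pdf
        (126, Some(123)), // Photos
        (127, Some(126)), // sunset.jpg
    ]);
    let pairs = closure(&parents);
    // Location A sees all five entries; Location B sees only Work's subtree.
    assert_eq!(descendants(&pairs, 123).len(), 5);
    assert_eq!(descendants(&pairs, 124).len(), 2);
}
```

Because `report.pdf` appears once in `parents`, both queries traverse the same stored entry, which is exactly the deduplication the design buys.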
All files and folders within a location form a tree structure:
```
Location (entry_id: 123)
└─ Entry(123): "Documents"  parent_id: NULL
   ├─ Entry(124): "Work"        parent_id: 123
   │  └─ Entry(125): "report.pdf"  parent_id: 124
   └─ Entry(126): "Photos"      parent_id: 123
      └─ Entry(127): "sunset.jpg"  parent_id: 126
```
The entry_closure table enables efficient queries:
Filtering `entry_closure` on `ancestor_id = 123` returns every entry in the location with a single indexed lookup, no recursive traversal required.

Directory paths are stored separately for performance:
```
-- directory_paths table
entry_id | path
---------|-------------------------------
123      | /Users/alice/Documents
124      | /Users/alice/Documents/Work
126      | /Users/alice/Documents/Photos
```
This avoids storing redundant path information on every file entry and makes path-based lookups significantly faster. Only directories are stored in this table, not individual files.
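The lookup this enables can be sketched with a simple in-memory map standing in for `directory_paths`: a file's absolute path is one cache hit on its parent directory plus the file's own name. The names and ids below are illustrative:

```rust
use std::collections::HashMap;

/// Resolve a file's absolute path: one lookup for the parent directory
/// in the path cache, then append the file's own name. Only directories
/// live in the cache, so files never store a path of their own.
fn file_path(dir_paths: &HashMap<u32, String>, parent_id: u32, name: &str) -> Option<String> {
    dir_paths.get(&parent_id).map(|dir| format!("{dir}/{name}"))
}

fn main() {
    let mut cache = HashMap::new();
    cache.insert(124, "/Users/alice/Documents/Work".to_string());
    assert_eq!(
        file_path(&cache, 124, "report.pdf").as_deref(),
        Some("/Users/alice/Documents/Work/report.pdf")
    );
    // Unknown parent: no cached directory, no path.
    assert_eq!(file_path(&cache, 999, "report.pdf"), None);
}
```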
Entries don't have a device_id field. Instead, ownership is inherited from the location:
```rust
// Location owns all its entries
Location {
    device_id: 42, // Device that owns this location
    entry_id: 123, // Root of the entry tree
}

// Entries don't store device_id
Entry {
    id: 124,
    parent_id: 123, // Part of location's tree
    // NO device_id field
}
```
Why this matters for sync:
Efficient ownership queries - Find all entries owned by a device via entry_closure:
```sql
SELECT e.* FROM entries e
INNER JOIN entry_closure ec ON e.id = ec.descendant_id
WHERE ec.ancestor_id IN (
    SELECT entry_id FROM locations WHERE device_id = ?
);
```
No redundant storage - Avoids storing device_id on millions of entry records
Instant ownership transfer - When you move an external drive between devices, just update the location's device_id: millions of files change ownership instantly, without touching the entries table. See Library Sync - Portable Volumes for details.
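The transfer claim follows from resolving an entry's owner through its location rather than a per-entry field, so reassignment is one update regardless of entry count. A hypothetical in-memory sketch (the `entry_loc` map stands in for walking `entry_closure` up to a location root; none of these names are Spacedrive APIs):

```rust
use std::collections::HashMap;

/// Ownership lives only on the location, never on entries.
struct Location {
    device_id: u32,
}

/// Resolve an entry's owning device through its location.
fn owner_of(
    entry_loc: &HashMap<u32, u32>,       // entry_id -> location_id
    locations: &HashMap<u32, Location>,  // location_id -> location
    entry: u32,
) -> Option<u32> {
    entry_loc.get(&entry).and_then(|loc| locations.get(loc)).map(|l| l.device_id)
}

fn main() {
    let entry_loc = HashMap::from([(125, 1u32), (127, 1)]);
    let mut locations = HashMap::from([(1u32, Location { device_id: 42 })]);
    assert_eq!(owner_of(&entry_loc, &locations, 125), Some(42));

    // Move the drive to another device: a single field update...
    locations.get_mut(&1).unwrap().device_id = 7;
    // ...and every entry's resolved owner changes with it.
    assert_eq!(owner_of(&entry_loc, &locations, 125), Some(7));
    assert_eq!(owner_of(&entry_loc, &locations, 127), Some(7));
}
```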
Sync implications:
```rust
pub struct Location {
    pub id: i32,                          // Database primary key
    pub uuid: Uuid,                       // Unique identifier
    pub device_id: i32,                   // Device that owns this location
    pub entry_id: Option<i32>,            // Root entry for this location's tree
    pub name: Option<String>,             // Display name
    pub index_mode: String,               // "shallow" | "content" | "deep"
    pub scan_state: String,               // "pending" | "scanning" | "completed" | "error"
    pub last_scan_at: Option<DateTime<Utc>>,
    pub error_message: Option<String>,
    pub total_file_count: i64,
    pub total_byte_size: i64,
    pub job_policies: Option<String>,     // JSON-serialized policies
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}
```
Choose how deeply Spacedrive analyzes your files:
Locations track their scanning status:
```rust
pub enum ScanState {
    Idle,                      // Not scanning
    Scanning { progress: u8 }, // Currently scanning (0-100%)
    Completed,                 // Scan finished successfully
    Failed,                    // Scan encountered errors
    Paused,                    // Scan was paused
}
```
```shell
# Add a location
spacedrive location add ~/Documents --name "Documents"

# Add with deep indexing
spacedrive location add ~/Photos --name "Photos" --mode deep

# List all locations
spacedrive location list
```
```rust
let (location_id, job_id) = location_manager
    .add_location(
        library,
        PathBuf::from("/Users/alice/Desktop"),
        Some("Desktop".to_string()),
        device_id,
        IndexMode::Content,
    )
    .await?;
```
The location creation process is atomic (uses database transactions):
1. Create the root entry with `parent_id = NULL` (this is a root)
2. Set `indexed_at` to enable sync emission
3. Insert the root's row in the `entry_closure` table
4. Record the full path in the `directory_paths` table
5. Point the location at the root `entry_id`
6. Set `scan_state = "pending"`

If any step fails, the entire transaction rolls back.
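The all-or-nothing contract can be sketched with an in-memory store and a snapshot-based rollback; the real implementation uses database transactions, so this is only an illustration of the behavior, not the mechanism:

```rust
/// In-memory stand-in for the database; Clone gives a cheap snapshot.
#[derive(Default, Clone)]
struct Store {
    entries: Vec<String>,
    locations: Vec<String>,
}

/// Run every step inside one "transaction": if any step errors,
/// restore the snapshot so no partial state survives.
fn with_transaction<F>(store: &mut Store, steps: F) -> Result<(), String>
where
    F: FnOnce(&mut Store) -> Result<(), String>,
{
    let snapshot = store.clone();
    if let Err(e) = steps(store) {
        *store = snapshot; // roll back everything
        return Err(e);
    }
    Ok(())
}

fn main() {
    let mut store = Store::default();
    // A failure midway leaves no trace of the earlier steps.
    let result = with_transaction(&mut store, |s| {
        s.entries.push("root entry".into());
        s.locations.push("Documents".into());
        Err("directory_paths insert failed".into())
    });
    assert!(result.is_err());
    assert!(store.entries.is_empty() && store.locations.is_empty());
}
```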
The watcher service provides real-time monitoring across all platforms.
- macOS: Uses FSEvents for efficient volume-level monitoring
- Linux: Uses inotify for precise file-level events
- Windows: Uses ReadDirectoryChangesW for real-time updates
The watcher detects these file system changes:
```rust
pub enum WatcherEvent {
    Create,   // New file or directory
    Modify,   // Content or metadata changed
    Remove,   // File or directory deleted
    Rename {  // Move or rename operation
        from: PathBuf,
        to: PathBuf,
    },
}
```
The watcher respects the indexer rules configured for each location. These rules determine which files and directories are tracked:
Available Indexer Rules:
- no_system_files - Ignores .DS_Store, Thumbs.db, and other OS-generated files
- no_hidden - Ignores hidden files (dotfiles) except important ones like .gitignore
- no_git - Ignores .git directories
- gitignore - Respects .gitignore patterns in the directory tree
- only_images - Only processes image files
- no_dev_dirs - Ignores node_modules/, target/, .cache/, and other build directories

Default Configuration:
```rust
RuleToggles {
    no_system_files: true, // Skip OS junk
    no_hidden: false,      // Include dotfiles
    no_git: true,          // Skip .git
    gitignore: true,       // Respect .gitignore
    only_images: false,    // Index all file types
    no_dev_dirs: true,     // Skip build artifacts
}
```
When indexer rules integration is complete, the watcher will automatically filter events based on each location's configured rules, ensuring consistency between initial indexing and real-time change detection.
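A sketch of how such toggles might filter paths. The struct mirrors a subset of the toggles above, but the matching logic and the `allowed` helper are illustrative assumptions, not Spacedrive's actual rule engine:

```rust
/// Subset of the rule toggles; the matching logic is illustrative.
struct RuleToggles {
    no_system_files: bool,
    no_git: bool,
    no_dev_dirs: bool,
}

/// Decide whether a path should be indexed or watched under these rules.
fn allowed(rules: &RuleToggles, path: &str) -> bool {
    let name = path.rsplit('/').next().unwrap_or(path);
    if rules.no_system_files && (name == ".DS_Store" || name == "Thumbs.db") {
        return false; // OS-generated junk
    }
    if rules.no_git && path.split('/').any(|c| c == ".git") {
        return false; // anywhere under a .git directory
    }
    if rules.no_dev_dirs && path.split('/').any(|c| c == "node_modules" || c == "target") {
        return false; // build artifacts
    }
    true
}

fn main() {
    let rules = RuleToggles { no_system_files: true, no_git: true, no_dev_dirs: true };
    assert!(!allowed(&rules, "project/.git/HEAD"));
    assert!(!allowed(&rules, "project/node_modules/lodash/index.js"));
    assert!(!allowed(&rules, "Photos/.DS_Store"));
    assert!(allowed(&rules, "project/src/main.rs"));
}
```

The same predicate serving both the indexer and the watcher is what keeps initial indexing and real-time change detection consistent.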
Configure the watcher for your needs:
```rust
LocationWatcherConfig {
    debounce_duration: Duration::from_millis(100), // Event consolidation
    event_buffer_size: 1000,                       // Queue size
    debug_mode: false,                             // Detailed logging
}
```
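The `debounce_duration` setting consolidates bursts of events for the same path. A std-only sketch with simulated millisecond timestamps; the `consolidate` function is a hypothetical stand-in for the watcher's internal logic, not its actual code:

```rust
use std::collections::HashMap;

/// Collapse bursts of events: an event for a path is swallowed when the
/// previous event for the same path arrived within the debounce window.
/// Timestamps are simulated milliseconds to keep the sketch deterministic.
fn consolidate(events: &[(u64, &str)], window_ms: u64) -> Vec<String> {
    let mut last_seen: HashMap<&str, u64> = HashMap::new();
    let mut emitted = Vec::new();
    for &(t, path) in events {
        match last_seen.get(path) {
            Some(&prev) if t - prev < window_ms => {} // within the window: drop
            _ => emitted.push(path.to_string()),
        }
        last_seen.insert(path, t);
    }
    emitted
}

fn main() {
    // Three rapid writes to a.txt, one later write, and one to b.txt.
    let events = [(0, "a.txt"), (30, "a.txt"), (60, "a.txt"), (500, "a.txt"), (510, "b.txt")];
    assert_eq!(consolidate(&events, 100), vec!["a.txt", "a.txt", "b.txt"]);
}
```

A larger window means fewer events during heavy writes (e.g. a file being downloaded) at the cost of slightly delayed updates.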
For large directories (>100k files):
For network drives:
For SSDs vs HDDs:
Subscribe to location events in your code:
```rust
let mut events = event_bus.subscribe();

while let Ok(event) = events.recv().await {
    match event {
        Event::EntryCreated { path, .. } => {
            println!("New file: {}", path.display());
        }
        Event::EntryModified { path, .. } => {
            println!("File changed: {}", path.display());
        }
        _ => {}
    }
}
```
When a file changes:
Temporarily stop monitoring without removing:
```shell
spacedrive location pause <location-id>
```
Change location configuration:
```shell
# Disable watching
spacedrive location update <location-id> --watch false

# Change index mode
spacedrive location update <location-id> --mode shallow
```
Stop tracking a directory:
```shell
spacedrive location remove <location-id>
```
If watching causes high CPU:
```shell
# Find active locations
spacedrive location list --verbose

# Disable problematic location
spacedrive location update <id> --watch false
```
If file changes aren't detected:
If you see the same change multiple times:
Enable detailed logging:
```shell
# Set debug mode
export SPACEDRIVE_WATCHER_DEBUG=1

# Run with verbose logging
spacedrive --log-level debug
```
On Linux, raise the inotify watch limit if large locations exhaust it:

```shell
echo 524288 | sudo tee /proc/sys/fs/inotify/max_user_watches
```

Nested locations are supported, so you can add /Documents and /Documents/Work as separate locations.
Create .spacedriveignore files to exclude:
```
# Build artifacts
node_modules/
target/
dist/

# Cache directories
.cache/
*.tmp

# Large generated files
*.log
*.dump
```
For network-attached storage:
Critical events are processed first:
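One common way to implement this ordering is a priority queue. The sketch below uses std's `BinaryHeap` with illustrative priorities (lower number = more critical); the specific priority assignments are an assumption for the example, not Spacedrive's actual scheduler:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Drain events in priority order; lower numbers drain first, so a
/// removal (which invalidates cached state) beats a routine modify.
fn drain_in_priority(mut queue: BinaryHeap<Reverse<(u8, &'static str)>>) -> Vec<&'static str> {
    let mut out = Vec::new();
    while let Some(Reverse((_, event))) = queue.pop() {
        out.push(event);
    }
    out
}

fn main() {
    let mut queue = BinaryHeap::new();
    queue.push(Reverse((2, "modify"))); // arrives first, runs last
    queue.push(Reverse((0, "remove")));
    queue.push(Reverse((1, "create")));
    assert_eq!(drain_in_priority(queue), vec!["remove", "create", "modify"]);
}
```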
Locations utilize three main tables:
locations - Location metadata
```sql
CREATE TABLE locations (
    id INTEGER PRIMARY KEY,
    uuid BLOB NOT NULL UNIQUE,
    device_id INTEGER NOT NULL,
    entry_id INTEGER, -- References entries(id)
    name TEXT,
    index_mode TEXT NOT NULL,
    scan_state TEXT NOT NULL,
    -- ... other fields
    FOREIGN KEY (device_id) REFERENCES devices(id),
    FOREIGN KEY (entry_id) REFERENCES entries(id)
);
```
entries - All files and directories (see Data Model)
entry_closure - Transitive closure table for hierarchy (see Data Model)
directory_paths - Directory path cache
```sql
CREATE TABLE directory_paths (
    entry_id INTEGER PRIMARY KEY, -- References entries(id)
    path TEXT NOT NULL,
    FOREIGN KEY (entry_id) REFERENCES entries(id)
);
```
Find all entries in a location:
```sql
SELECT e.* FROM entries e
INNER JOIN entry_closure ec ON e.id = ec.descendant_id
WHERE ec.ancestor_id = (
    SELECT entry_id FROM locations WHERE uuid = ?
);
```
Get full path for a directory:
```sql
SELECT path FROM directory_paths
WHERE entry_id = ?;
```
Get full path for a file:
```sql
-- Join the file's parent directory against the path cache,
-- then append the file's own name
SELECT dp.path || '/' || e.name AS full_path
FROM entries e
LEFT JOIN entries parent ON e.parent_id = parent.id
LEFT JOIN directory_paths dp ON parent.id = dp.entry_id
WHERE e.id = ?;
```