packages/super-sync-server/docs/backup-and-recovery.md
Super Productivity uses an append-only operation log for sync. Every client (desktop, mobile, web) keeps a full copy of its data in local IndexedDB. The server is a relay — clients are the source of truth, not the server.
This means disaster recovery is simpler than in a traditional server-authoritative system: as long as one client device survives, all data can be recovered.
| Data | Where it lives | Why back it up |
|---|---|---|
| User accounts (email, password hash) | Server only | Users can't authenticate without this |
| Passkeys (WebAuthn credentials) | Server only | Can't be regenerated |
| Operation log | Server + all clients | Last resort if all client devices are lost |
| Task/project/tag data | Derived from operation log | Clients reconstruct from ops |
The backup script creates two dumps:
- **Full dump** (`supersync_*.sql.gz`) — complete database including all operations (~300MB+ for active instances)
- **Accounts-only dump** (`supersync_accounts_*.sql.gz`) — just the users and passkeys tables (tiny, <1MB)

```bash
# Run manually
./scripts/backup.sh

# Set up daily cron at 3 AM with 3-day retention
(crontab -l 2>/dev/null; echo "0 3 * * * RETENTION_DAYS=3 /path/to/scripts/backup.sh >> /var/log/supersync-backup.log 2>&1") | crontab -
```
Backups are saved to `backups/` next to the scripts directory.
| Variable | Default | Description |
|---|---|---|
| `BACKUP_DIR` | `../backups` | Where to store backup files |
| `RETENTION_DAYS` | `14` | Delete backups older than this |
| `DB_CONTAINER` | `supersync-postgres` | Docker container name |
| `POSTGRES_USER` | `supersync` | Database user |
| `POSTGRES_DB` | `supersync` | Database name |
| `RCLONE_REMOTE` | (empty) | Optional rclone remote for off-site upload |
```bash
# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Configure a remote (e.g., Backblaze B2)
rclone config

# Run backup with upload
RCLONE_REMOTE=b2:my-bucket/supersync ./scripts/backup.sh --upload
```
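With `--upload` set, the script presumably copies the fresh dumps to the configured remote after dumping. A hedged sketch of such a step (the flag handling and the `--include` filter are assumptions, not the real script's logic):

```shell
#!/usr/bin/env bash
# Hypothetical off-site upload step; the real backup.sh may handle flags differently.
set -euo pipefail
BACKUP_DIR="${BACKUP_DIR:-../backups}"

if [ "${1:-}" = "--upload" ] && [ -n "${RCLONE_REMOTE:-}" ]; then
  # Copy local dumps to the remote; rclone copy skips files already present.
  rclone copy "$BACKUP_DIR" "$RCLONE_REMOTE" --include 'supersync_*.sql.gz'
fi
```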
This is the simplest and most reliable recovery method when at least one client device has been online recently.
How it works: every client keeps a full copy of the operation log in local IndexedDB (see the table above), so only the server's account tables need to be restored. Once users can authenticate again, connecting clients re-upload their data automatically.

Steps:
```bash
# 1. Restore accounts from backup
gunzip -c backups/supersync_accounts_YYYYMMDD_HHMMSS.sql.gz | \
  docker exec -i supersync-postgres psql -U supersync supersync

# 2. That's it — clients will re-sync automatically when they connect
```
Why this is preferred:

- No `SYNC_IMPORT_EXISTS` conflicts that occur with partial restores
- Covered by automated tests (`supersync-server-backup-revert.spec.ts`)

Use this only if all client devices are lost (no client can re-upload data).
```bash
# 1. Stop the server
docker compose stop supersync

# 2. Drop existing data and restore the full dump
docker exec -i supersync-postgres psql -U supersync supersync \
  -c "DROP SCHEMA public CASCADE; CREATE SCHEMA public;"
gunzip -c backups/supersync_YYYYMMDD_HHMMSS.sql.gz | \
  docker exec -i supersync-postgres psql -U supersync supersync

# 3. Restart the server
docker compose start supersync
```
Note: The database name (`supersync` above) must match your deployment's `POSTGRES_DB` setting. Check your `.env` or `docker-compose.yml` for the actual value.
Known limitation: If clients reconnect after a full restore, the server's existing `SYNC_IMPORT` operation can conflict with the client's gap detection mechanism (`SYNC_IMPORT_EXISTS` error). To resolve this, use the "Reset Account" feature in the app to clear server sync data, then re-sync.
```
Server is down / data lost
├── Do any client devices still have data?
│   ├── YES → Use accounts-only restore (recommended)
│   │         Clients will re-upload automatically
│   └── NO  → Use full database restore (fallback)
│             Accept data loss since last backup
```
If your VPS hoster provides incremental backups (e.g., daily snapshots), these serve as an additional safety net. However, they are whole-server snapshots: they cannot be used for an accounts-only restore, and restoring one rolls back everything on the machine, not just the database.

The combination of `pg_dump` cron + hoster backups covers both scenarios well.
```bash
# Check backup exists and has reasonable size
ls -lh backups/

# Verify the dump contains valid SQL
gunzip -c backups/supersync_YYYYMMDD_HHMMSS.sql.gz | head -5

# Check cron is running
cat /var/log/supersync-backup.log
```
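`head -5` only proves the file starts with SQL; `gzip -t` checks the integrity of the entire compressed file. A quick loop over all dumps:

```shell
#!/usr/bin/env bash
# Verify every backup decompresses cleanly, without writing anything to disk.
set -euo pipefail
for f in backups/supersync_*.sql.gz; do
  gzip -t "$f" && echo "OK: $f"
done
```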
The backup recovery scenarios are covered by automated tests in `e2e/tests/sync/supersync-server-backup-revert.spec.ts`.