packages/super-sync-server/README.md
A custom, high-performance synchronization server for Super Productivity.
Note: This server implements a custom operation-based synchronization protocol (Event Sourcing), not WebDAV. It is designed specifically for the Super Productivity client's efficient sync requirements.
Related Documentation:
- Authentication Architecture - Auth design decisions and security features
- Operation Log Architecture - Client-side architecture
- Server Architecture Diagrams - Visual diagrams
- Backup & Disaster Recovery - Backup setup and recovery procedures
The server uses an Append-Only Log architecture backed by PostgreSQL (via Prisma):
Clients append operations to the log, and the server assigns a monotonic `server_seq` to each operation.

| Principle | Description |
|---|---|
| Server-Authoritative | Server assigns monotonic sequence numbers for total ordering |
| Client-Side Conflict Resolution | Server stores operations as-is; clients detect and resolve conflicts |
| E2E Encryption Support | Payloads can be encrypted client-side; server treats them as opaque blobs |
| Idempotent Uploads | Request ID deduplication prevents duplicate operations |
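The first, second, and last principles can be sketched together. The following is a hypothetical in-memory illustration (the real server persists to PostgreSQL via Prisma; the class and method names are made up for this sketch):

```typescript
// Illustrative only: an append-only log that assigns server-authoritative,
// monotonic sequence numbers and deduplicates uploads by operation id.
type Op = { id: string; payload: unknown };

class AppendOnlyLog {
  private seq = 0;
  private seen = new Map<string, number>(); // op id -> previously assigned server_seq
  private log: Array<{ serverSeq: number; op: Op }> = [];

  // Appending the same op id twice returns the original seq (idempotent upload).
  append(op: Op): { accepted: boolean; serverSeq: number } {
    const existing = this.seen.get(op.id);
    if (existing !== undefined) return { accepted: false, serverSeq: existing };
    const serverSeq = ++this.seq; // server-authoritative, monotonic
    this.seen.set(op.id, serverSeq);
    this.log.push({ serverSeq, op });
    return { accepted: true, serverSeq };
  }

  // All ops with serverSeq > sinceSeq, in total order.
  opsSince(sinceSeq: number) {
    return this.log.filter((e) => e.serverSeq > sinceSeq);
  }
}
```

Conflict resolution is deliberately absent here: the server stores payloads as-is (possibly encrypted), and clients resolve conflicts using the total order and vector clocks.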
The easiest way to run the server is to use the provided Docker Compose configuration.

```bash
# 1. Copy environment example
cp .env.example .env

# 2. Configure .env (set JWT_SECRET, DOMAIN, POSTGRES_PASSWORD)
nano .env

# 3. Start the stack (Server + Postgres + Caddy)
docker-compose up -d
```
Alternatively, run the server manually against your own PostgreSQL instance:

```bash
# Install dependencies
npm install

# Generate Prisma Client
npx prisma generate

# Set up .env
cp .env.example .env
# Edit .env to point to your PostgreSQL instance (DATABASE_URL)

# Push schema to DB
npx prisma db push

# Start the server
npm run dev

# Or build and run
npm run build
npm start
```
All configuration is done via environment variables.
| Variable | Default | Description |
|---|---|---|
| PORT | 1900 | Server port |
| DATABASE_URL | - | PostgreSQL connection string (e.g. `postgresql://user:pass@localhost:5432/db`) |
| JWT_SECRET | - | Required. Secret for signing JWTs (min 32 chars) |
| PUBLIC_URL | - | Required. Public URL used for email links (e.g. `https://sync.example.com`) |
| CORS_ORIGINS | https://app.super-productivity.com | Allowed CORS origins |
| SMTP_HOST | - | SMTP server for emails |
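A minimal example `.env` might look like the following. All values are placeholders, and only variables from the table above are shown; see `.env.example` for the full list.

```shell
PORT=1900
DATABASE_URL=postgresql://sync:change-me@localhost:5432/supersync
JWT_SECRET=generate-a-random-string-of-at-least-32-characters
PUBLIC_URL=https://sync.example.com
CORS_ORIGINS=https://app.super-productivity.com
```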
```http
POST /api/register
Content-Type: application/json

{
  "email": "[email protected]",
  "password": "yourpassword"
}
```

Response:

```json
{
  "message": "User registered. Please verify your email.",
  "id": 1,
  "email": "[email protected]"
}
```
```http
POST /api/login
Content-Type: application/json

{
  "email": "[email protected]",
  "password": "yourpassword"
}
```

Response:

```json
{
  "token": "jwt-token",
  "user": { "id": 1, "email": "[email protected]" }
}
```
All sync endpoints require Bearer authentication: `Authorization: Bearer <jwt-token>`
- `POST /api/sync/ops`: Send new changes to the server.
- `GET /api/sync/ops?sinceSeq=123`: Get changes from other devices.
- `GET /api/sync/snapshot`: Get the full current state (optimized).
- `GET /api/sync/status`: Check pending operations and device status.
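As an illustration, a client-side helper for building authenticated sync requests could look like this. The endpoint paths and the `Authorization` header come from this README; the helper name and return shape are invented for the sketch:

```typescript
// Illustrative: build the URL and headers for a sync endpoint call.
function syncRequest(
  baseUrl: string,
  token: string,
  endpoint: "ops" | "snapshot" | "status",
  sinceSeq?: number,
): { url: string; headers: Record<string, string> } {
  const url = new URL(`/api/sync/${endpoint}`, baseUrl);
  if (endpoint === "ops" && sinceSeq !== undefined) {
    // Downloads require the last seen server sequence number.
    url.searchParams.set("sinceSeq", String(sinceSeq));
  }
  return {
    url: url.toString(),
    headers: { Authorization: `Bearer ${token}` }, // required on all sync endpoints
  };
}
```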
In Super Productivity, configure the Custom Sync provider with your server URL, e.g. `https://sync.your-domain.com` (or your deployed URL).

The server includes scripts for administrative tasks. These use the configured database.
```bash
# Delete a user account
npm run delete-user -- [email protected]

# Clear sync data (preserves account)
npm run clear-data -- [email protected]

# Clear ALL sync data (dangerous)
npm run clear-data -- --all
```
Upload operations (`POST /api/sync/ops`). Request body:
```json
{
  "ops": [
    {
      "id": "uuid-v7",
      "opType": "UPD",
      "entityType": "TASK",
      "entityId": "task-123",
      "payload": { "changes": { "title": "New title" } },
      "vectorClock": { "clientA": 5 },
      "timestamp": 1701234567890,
      "schemaVersion": 1
    }
  ],
  "clientId": "clientA",
  "lastKnownSeq": 100
}
```

Response:

```json
{
  "results": [{ "opId": "uuid-v7", "accepted": true, "serverSeq": 101 }],
  "newOps": [],
  "latestSeq": 101
}
```
Download operations (`GET /api/sync/ops`). Query parameters:

- `sinceSeq` (required): Server sequence number to start from
- `limit` (optional): Max operations to return (default: 500)

Upload a snapshot (`POST /api/sync/snapshot`). Used for full-state operations (BackupImport, SyncImport, Repair):
```json
{
  "state": {
    /* Full AppDataComplete */
  },
  "clientId": "clientA",
  "reason": "initial",
  "vectorClock": { "clientA": 10 },
  "schemaVersion": 1
}
```
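Because conflict resolution is client-side, clients typically compare the `vectorClock` values seen in these payloads to decide whether two operations are ordered or concurrent. Below is a sketch of the standard vector-clock comparison; it is not necessarily the exact code the Super Productivity client uses:

```typescript
type VectorClock = Record<string, number>;

// Standard pairwise comparison: "concurrent" means neither clock dominates,
// i.e. the edits happened independently and need conflict resolution.
function compareClocks(a: VectorClock, b: VectorClock): "equal" | "before" | "after" | "concurrent" {
  let aAhead = false;
  let bAhead = false;
  for (const key of [...Object.keys(a), ...Object.keys(b)]) {
    const av = a[key] ?? 0;
    const bv = b[key] ?? 0;
    if (av > bv) aAhead = true;
    if (bv > av) bAhead = true;
  }
  if (aAhead && bAhead) return "concurrent";
  if (aAhead) return "after";
  if (bAhead) return "before";
  return "equal";
}
```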
| Feature | Implementation |
|---|---|
| Authentication | JWT Bearer tokens in Authorization header |
| Timing Attack Mitigation | Dummy hash comparison on invalid users |
| Input Validation | Operation ID, entity ID, schema version validated |
| Rate Limiting | Configurable per-user limits |
| Vector Clock Sanitization | Limited to 50 entries, 255 char keys |
| Entity Type Allowlist | Prevents injection of invalid entity types |
| Request Deduplication | Prevents duplicate operations on retry |
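The vector-clock sanitization limits in the table (50 entries, 255-character keys) can be sketched as follows. The caps come from the table; whether oversized keys are dropped, truncated, or rejected outright is an assumption here, as is the non-negative integer check:

```typescript
const MAX_ENTRIES = 50;
const MAX_KEY_LENGTH = 255;

// Illustrative sanitizer: keep at most MAX_ENTRIES entries, drop keys that are
// too long, and drop values that are not non-negative integers.
function sanitizeVectorClock(clock: Record<string, number>): Record<string, number> {
  const out: Record<string, number> = {};
  let count = 0;
  for (const [key, value] of Object.entries(clock)) {
    if (count >= MAX_ENTRIES) break;
    if (key.length > MAX_KEY_LENGTH) continue;
    if (!Number.isInteger(value) || value < 0) continue;
    out[key] = value;
    count++;
  }
  return out;
}
```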
When deploying multiple server instances behind a load balancer, be aware of these limitations:
Issue: WebAuthn challenges are stored in an in-memory Map, which doesn't work across instances.
Symptom: Passkey registration/login fails if the challenge generation request hits instance A but verification hits instance B.
Solution for multi-instance: use sticky sessions at the load balancer, or move challenge storage to a shared store such as Redis.
Current status: A warning is logged at startup in production if in-memory storage is used.
Issue: Concurrent snapshot generation prevention uses an in-memory Map.
Symptom: Same user may trigger duplicate snapshot computations across different instances.
Impact: Performance only (no data corruption) - snapshots are deterministic.
Solution for multi-instance: use a distributed lock (e.g. a PostgreSQL advisory lock or a Redis-based lock) so only one instance computes a given user's snapshot at a time.
For single-instance deployments, these limitations do not apply. The current implementation is fully functional and well-tested for single-instance use.