docs/sync-and-op-log/operation-log-architecture.md
Status: Parts A, B, C, D Complete (single-version; cross-version sync A.7.11 documented, not implemented)
Branch: feat/operation-logs
Last Updated: January 8, 2026
Note: As of January 2026, the legacy PFAPI system has been completely eliminated. All sync providers (SuperSync, WebDAV, Dropbox, LocalFile) now use the unified operation log system.
The Operation Log fundamentally changes how the app treats data. Instead of treating the database as a "bucket" where we overwrite data (e.g., "The task title is now X"), we treat it as a timeline of events (e.g., "At 10:00 AM, User changed task title to X").
Every change is recorded as an event, whether it is a create, an update, or a DELETE operation. When a user performs an action (like ticking a checkbox):

- The NgRx action (e.g., a TaskUpdate) is converted into an Operation object. This object includes what changed and the payload (e.g., { isDone: true }).
- The Operation is immediately appended to the SUP_OPS table in IndexedDB. This is very fast because we're just adding a small JSON object, not rewriting a huge file.

Replaying every operation since the beginning would be too slow, so we use Snapshots (periodic full-state caches) to speed this up.
The Operation Log enables two types of synchronization:
A. True "Server Sync" (The Modern Way) This is efficient and precise.
Operations, not full files. This saves massive amounts of bandwidth.B. "File-Based Sync" (Dropbox, WebDAV, Local File) This uses a single-file approach with embedded operations.
sync-data.json file containing: full state snapshot + recent operations bufferThe system assumes data corruption is inevitable (power loss, bad sync, cosmic rays) and builds defenses against it:
REPAIR Operation.REPAIR op is saved to the log. This means you can look back and see exactly when and why the system modified your data automatically.If we kept every operation forever, the database would grow huge.
The Operation Log serves four distinct purposes:
| Purpose | Description | Status |
|---|---|---|
| A. Local Persistence | Fast writes, crash recovery, event sourcing | Complete ✅ |
| B. File-Based Sync | Single-file sync for WebDAV/Dropbox/LocalFile | Complete ✅ |
| C. Server Sync | Upload/download individual operations (SuperSync) | Complete ✅ (single-version)¹ |
| D. Validation & Repair | Prevent corruption, auto-repair invalid state | Complete ✅ |
¹ Cross-version sync limitation: Part C is complete for clients on the same schema version. Cross-version sync (A.7.11) is not yet implemented—see A.7.11 Conflict-Aware Migration for guardrails.
✅ Migration Ready: Migration safety (A.7.12), tail ops consistency (A.7.13), and unified migration interface (A.7.15) are now implemented. The system is ready for schema migrations when
CURRENT_SCHEMA_VERSION > 1.
This document is structured around these four purposes. Most complexity lives in Part A (local persistence). Part B handles file-based sync via the FileBasedSyncAdapter. Part C handles operation-based sync with SuperSync server. Part D integrates validation and automatic repair.
┌───────────────────────────────────────────────────────────────────┐
│ User Action │
└───────────────────────────────────────────────────────────────────┘
▼
NgRx Store
(Runtime Source of Truth)
│
┌───────────────────┼───────────────────┐
▼ │ ▼
OpLogEffects │ Other Effects
│ │
├──► SUP_OPS ◄──────┘
│ (Local Persistence - Part A)
│
└──► Sync Providers
├── SuperSync (Part C - operation-based)
└── WebDAV/Dropbox/LocalFile (Part B - file-based)
The operation log is primarily a Write-Ahead Log (WAL) for local persistence. It provides:
// ops table - the event log
interface OperationLogEntry {
seq: number; // Auto-increment primary key
op: Operation; // The operation
appliedAt: number; // When applied locally
source: 'local' | 'remote';
syncedAt?: number; // For server sync (Part C)
rejectedAt?: number; // When rejected during conflict resolution
}
// state_cache table - periodic snapshots
interface StateCache {
state: AllSyncModels; // Full snapshot
lastAppliedOpSeq: number;
vectorClock: VectorClock; // Current merged vector clock
compactedAt: number; // When this snapshot was created
schemaVersion?: number; // Optional for backward compatibility
}
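To illustrate how the two tables relate at load time, here is a minimal sketch. The names and types are simplified stand-ins for OperationLogEntry/StateCache, and applyOp stands in for the real operation applier; this is not the actual hydrator code.

```typescript
// Hedged sketch (not the actual hydrator): current state = snapshot + tail ops.
interface SnapshotLike {
  state: Record<string, unknown>;
  lastAppliedOpSeq: number;
}

interface EntryLike {
  seq: number;
  op: unknown;
}

function rebuildState(
  snapshot: SnapshotLike,
  entries: EntryLike[],
  applyOp: (state: Record<string, unknown>, op: unknown) => Record<string, unknown>,
): Record<string, unknown> {
  // Only ops appended after the snapshot was taken need to be replayed
  const tail = entries.filter((e) => e.seq > snapshot.lastAppliedOpSeq);
  return tail.reduce((state, entry) => applyOp(state, entry.op), snapshot.state);
}
```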
┌─────────────────────────────────────────────────────────────────────┐
│ IndexedDB │
├─────────────────────────────────────────────────────────────────────┤
│ 'SUP_OPS' database (Operation Log) │
│ │
│ ┌──────────────────────────────────────────────────────────┐ │
│ │ ops (event log) - Append-only operation log │ │
│ │ state_cache - Periodic state snapshots │ │
│ │ meta - Vector clocks, sync state │ │
│ │ archive_young - Recent archived tasks │ │
│ │ archive_old - Old archived tasks │ │
│ └──────────────────────────────────────────────────────────┘ │
│ │
│ ALL model data persisted here │
└─────────────────────────────────────────────────────────────────────┘
Key insight: All application data is persisted in the SUP_OPS database via the operation log system.
User Action
│
▼
NgRx Dispatch (action)
│
├──► Reducer updates state (optimistic, in-memory)
│
└──► OperationLogEffects
│
├──► Filter: action.meta.isPersistent === true?
│ └──► Skip if false or missing
│
├──► Filter: action.meta.isRemote === true?
│ └──► Skip (prevents re-logging sync/replay)
│
├──► Convert action to Operation
│
├──► Append to SUP_OPS.ops (disk)
│
├──► Increment META_MODEL.vectorClock (Part B bridge)
│
└──► Broadcast to other tabs
interface Operation {
id: string; // UUID v7 (time-ordered)
actionType: string; // NgRx action type
opType: OpType; // CRT | UPD | DEL | MOV | BATCH
entityType: EntityType; // TASK | PROJECT | TAG | NOTE | ...
entityId?: string; // Affected entity ID
entityIds?: string[]; // For batch operations
payload: unknown; // Action payload
clientId: string; // Device ID
vectorClock: VectorClock; // Per-op causality (for Part C)
timestamp: number; // Wall clock (epoch ms)
schemaVersion: number; // For migrations
}
type OpType =
| 'CRT' // Create
| 'UPD' // Update
| 'DEL' // Delete
| 'MOV' // Move (list reordering)
| 'BATCH' // Bulk operations (import, mass update)
| 'SYNC_IMPORT' // Full state import from remote sync
| 'BACKUP_IMPORT' // Full state import from backup file
| 'REPAIR'; // Auto-repair operation with full repaired state
type EntityType =
| 'TASK'
| 'PROJECT'
| 'TAG'
| 'NOTE'
| 'GLOBAL_CONFIG'
| 'SIMPLE_COUNTER'
| 'WORK_CONTEXT'
| 'TASK_REPEAT_CFG'
| 'ISSUE_PROVIDER'
| 'PLANNER'
| 'MENU_TREE'
| 'METRIC'
| 'BOARD'
| 'REMINDER'
| 'PLUGIN_USER_DATA'
| 'PLUGIN_METADATA'
| 'MIGRATION'
| 'RECOVERY'
| 'ALL';
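For concreteness, a hedged example of roughly what an operation produced by ticking a task's checkbox could look like. All values, including the action type string, are illustrative and not captured from the real app.

```typescript
// Illustrative values only - not captured from the real app.
const exampleOp: Operation = {
  id: '018f3c2a-7b1e-7cde-8f00-1a2b3c4d5e6f', // UUID v7, time-ordered
  actionType: '[Task Shared] Update Task', // assumed action type string
  opType: 'UPD',
  entityType: 'TASK',
  entityId: 'task-123',
  payload: { task: { id: 'task-123', changes: { isDone: true } } },
  clientId: 'client-A',
  vectorClock: { 'client-A': 12, 'client-B': 7 },
  timestamp: 1767139200000, // epoch ms
  schemaVersion: 1,
};
```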
Actions are persisted based on explicit meta.isPersistent: true:
// persistent-action.interface.ts
export interface PersistentActionMeta {
isPersistent?: boolean; // When true, action is persisted
entityType: EntityType;
entityId?: string;
entityIds?: string[]; // For batch operations
opType: OpType;
isRemote?: boolean; // TRUE if from Sync (prevents re-logging)
isBulk?: boolean; // TRUE for batch operations
}
// Type guard - only actions with explicit isPersistent: true are persisted
export const isPersistentAction = (action: Action): action is PersistentAction => {
const a = action as PersistentAction;
return !!a.meta && a.meta.isPersistent === true;
};
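As a hedged illustration of how an action ends up carrying this meta, here is a hypothetical action creator; it is not one of the app's real creators.

```typescript
// Hypothetical action creator attaching PersistentActionMeta.
import { createAction } from '@ngrx/store';

const markTaskDone = createAction('[Task] Mark Done', (taskId: string) => ({
  taskId,
  meta: {
    isPersistent: true, // picked up by isPersistentAction() in OperationLogEffects
    entityType: 'TASK' as const,
    entityId: taskId,
    opType: 'UPD' as const,
  },
}));

// store.dispatch(markTaskDone('task-123')) would then be converted to an UPD operation.
```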
Actions without an explicit isPersistent: true in their meta (e.g., purely UI-related actions) are simply not persisted.
App Startup
│
▼
OperationLogHydratorService
│
├──► Load snapshot from SUP_OPS.state_cache
│ │
│ └──► If no snapshot: Genesis migration from 'pf'
│
├──► Run schema migration if needed
│
├──► Dispatch loadAllData(snapshot, { isHydration: true })
│
└──► Load tail ops (seq > snapshot.lastAppliedOpSeq)
│
├──► If last op is SyncImport: load directly (skip replay)
│
├──► Otherwise: Replay ops (prevents re-logging via isRemote flag)
│
└──► If replayed >10 ops: Save new snapshot for faster future loads
Two optimizations speed up hydration:
Skip replay for SyncImport: When the last operation in the log is a SyncImport (full state import), the hydrator loads it directly instead of replaying all preceding operations. This significantly speeds up initial load after imports or syncs.
Save snapshot after replay: After replaying more than 10 tail operations, a new state cache snapshot is saved. This avoids replaying the same operations on subsequent startups.
On first startup (SUP_OPS empty), the system initializes with default state:
async createGenesisSnapshot(): Promise<void> {
// Initialize with default state or migrate from legacy if present
const initialState = await this.getInitialState();
// Create initial snapshot
await this.opLogStore.saveStateCache({
state: initialState,
lastAppliedOpSeq: 0,
vectorClock: {},
compactedAt: Date.now(),
schemaVersion: CURRENT_SCHEMA_VERSION
});
}
For users upgrading from legacy formats, ServerMigrationService handles the migration during first sync.
Without compaction, the op log grows unbounded. Compaction:
async compact(): Promise<void> {
// 1. Acquire lock
await this.lockService.request('sp_op_log_compact', async () => {
// 2. Read current state from NgRx (via delegate)
const currentState = await this.storeDelegate.getAllSyncModelDataFromStore();
// 3. Save new snapshot
const lastSeq = await this.opLogStore.getLastSeq();
await this.opLogStore.saveStateCache({
state: currentState,
lastAppliedOpSeq: lastSeq,
vectorClock: await this.opLogStore.getCurrentVectorClock(),
compactedAt: Date.now(),
schemaVersion: CURRENT_SCHEMA_VERSION
});
// 4. Delete old ops (sync-aware)
// Only delete ops that have been synced AND are older than retention window
const retentionWindowMs = 7 * 24 * 60 * 60 * 1000; // 7 days
const cutoff = Date.now() - retentionWindowMs;
await this.opLogStore.deleteOpsWhere(
(entry) =>
!!entry.syncedAt && // never drop unsynced ops
entry.appliedAt < cutoff &&
entry.seq <= lastSeq
);
});
}
| Setting | Value | Description |
|---|---|---|
| Compaction trigger | 500 ops | Ops before snapshot |
| Retention window | 7 days | Keep recent synced ops |
| Emergency retention | 1 day | Shorter retention for quota exceeded |
| Compaction timeout | 25 sec | Abort if exceeds (prevents lock expiration) |
| Max compaction failures | 3 | Failures before user notification |
| Unsynced ops | ∞ | Never delete unsynced ops |
| Max download ops in memory | 50,000 | Bounds memory during API download |
| Remote file retention | 14 days | Server-side operation file retention |
| Max remote files to keep | 100 | Minimum recent files on server |
| Max conflict retry attempts | 5 | Retries before rejecting failed ops |
| Max rejected ops before warning | 10 | Threshold for user notification |
| Lock timeout | 30 sec | localStorage fallback lock timeout |
| Lock acquire timeout | 60 sec | Max wait to acquire a lock |
| Max download retries | 3 | Retry attempts for failed file downloads |
| Max ops for snapshot (server) | 100,000 | Server-side memory protection for snapshot gen |
// Primary: Web Locks API
await navigator.locks.request('sp_op_log_write', async () => {
await this.writeOperation(op);
});
// Fallback: localStorage mutex (for older WebViews)
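The localStorage fallback itself isn't shown here; below is a minimal sketch of the idea, assuming an expiry-based key. The key name and polling interval are made up, and a production version would need an atomic-claim token to avoid races.

```typescript
// Hedged sketch of a localStorage mutex for WebViews without the Web Locks API.
const LOCK_KEY = 'sp_op_log_write_lock'; // assumed key name
const LOCK_TIMEOUT_MS = 30_000; // mirrors the "Lock timeout" setting above

async function withLocalStorageLock<T>(fn: () => Promise<T>): Promise<T> {
  // Wait until the lock is free or its expiry has passed (a crashed tab can't hold it forever)
  while (true) {
    const expiresAt = Number(localStorage.getItem(LOCK_KEY) ?? 0);
    if (Date.now() > expiresAt) {
      localStorage.setItem(LOCK_KEY, String(Date.now() + LOCK_TIMEOUT_MS));
      break;
    }
    await new Promise((resolve) => setTimeout(resolve, 50));
  }
  try {
    return await fn();
  } finally {
    localStorage.removeItem(LOCK_KEY);
  }
}
```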
When one tab writes an operation, it broadcasts the op to the other tabs, which convert it back into an action (with isRemote=true to prevent re-logging) and dispatch it to their own store:

// Tab A writes
this.broadcastChannel.postMessage({ type: 'NEW_OP', op });
// Tab B receives
this.broadcastChannel.onmessage = (event) => {
if (event.data.type === 'NEW_OP') {
const action = convertOpToAction(event.data.op); // Sets isRemote: true
this.store.dispatch(action);
}
};
When operations are synced from remote clients (other tabs or devices), they are dispatched to NgRx with meta.isRemote: true. Effects that perform side effects (snacks, work logs, notifications, plugin hooks) should NOT run for these remote operations, because the side effect already happened on the client where the action originated.
The LOCAL_ACTIONS injection token provides a pre-filtered Actions stream that excludes remote operations:
// src/app/util/local-actions.token.ts
import { inject, InjectionToken } from '@angular/core';
import { Actions } from '@ngrx/effects';
import { Action } from '@ngrx/store';
import { Observable } from 'rxjs';
import { filter } from 'rxjs/operators';
export const LOCAL_ACTIONS = new InjectionToken<Observable<Action>>('LOCAL_ACTIONS', {
providedIn: 'root',
factory: () => {
const actions$ = inject(Actions);
return actions$.pipe(filter((action: Action) => !(action as any).meta?.isRemote));
},
});
Use LOCAL_ACTIONS instead of Actions for effects that should NOT run for remote operations:
@Injectable()
export class MyEffects {
private _localActions$ = inject(LOCAL_ACTIONS); // LOCAL actions only (excludes isRemote)
private _actions$ = inject(Actions); // All actions (local + remote)
// ✅ Use LOCAL_ACTIONS for side effects
showSnack$ = createEffect(
() =>
this._localActions$.pipe(
ofType(TaskSharedActions.updateTask),
filter((action) => action.task.changes.isDone === true),
tap(() => this.snackService.open({ msg: 'Task completed!' })),
),
{ dispatch: false },
);
// ✅ Use regular actions$ for state updates that should apply everywhere
moveTaskToList$ = createEffect(() =>
this._actions$.pipe(
ofType(moveTaskInTodayList),
// This dispatches another action - should work for all sources
map(({ taskId }) => TaskSharedActions.updateTask({ ... })),
),
);
}
| Scenario | Use LOCAL_ACTIONS? | Reason |
|---|---|---|
| Show snackbar/toast | ✅ Yes | UI notification already happened on original client |
| Post work log to Jira/OpenProject | ✅ Yes | External API call already made |
| Play sound | ✅ Yes | Audio feedback is local-only |
| Update Electron taskbar | ✅ Yes | Desktop UI is local-only |
| Dispatch plugin hooks | ✅ Yes | Plugins already ran on original client |
| Update another entity in store | ❌ No | State change should apply everywhere |
| Navigate/route change | ✅ Yes | Navigation is local-only |
| Dispatch cascading actions | ⚠️ Depends | If it modifies state: No. If side-effect only: Yes |
1. Detect: Hydration fails or returns empty/invalid state
2. Check legacy 'pf' database for data
3. If found: Run recovery migration with that data
4. If not: Check remote sync for data
5. If remote has data: Force sync download
6. If all else fails: User must restore from backup
async hydrateStore(): Promise<void> {
try {
const snapshot = await this.opLogStore.loadStateCache();
if (!snapshot || !this.isValidSnapshot(snapshot)) {
await this.attemptRecovery();
return;
}
// Normal hydration...
} catch (e) {
await this.attemptRecovery();
}
}
private async attemptRecovery(): Promise<void> {
// 1. Try backup from state cache
const backupState = await this.tryLoadBackupSnapshot();
if (backupState) {
await this.recoverFromBackup(backupState);
return;
}
// 2. Try remote sync (triggers ServerMigrationService if needed)
// 3. Show error to user
}
When Super Productivity's data model changes (new fields, renamed properties, restructured entities), schema migrations ensure existing data remains usable after app updates.
Current Status: Migration infrastructure is implemented, but no actual migrations exist yet. The
MIGRATIONS array is empty and CURRENT_SCHEMA_VERSION = 1. This section documents the designed behavior for when migrations are needed.
CURRENT_SCHEMA_VERSION is defined in src/app/op-log/store/schema-migration.service.ts:
export const CURRENT_SCHEMA_VERSION = 1;
export const MIN_SUPPORTED_SCHEMA_VERSION = 1;
export const MAX_VERSION_SKIP = 5; // Max versions ahead we'll attempt to load
| Concept | Description |
|---|---|
| Schema Version | Integer tracking current data model version (stored in ops + snapshots) |
| Migration | Function transforming state from version N to N+1 |
| Snapshot Boundary | Migrations run when loading snapshots, creating clean versioned checkpoints |
| Forward Compatibility | Newer apps can read older data (via migrations) |
| Backward Compatibility | Older apps receiving newer ops (via graceful degradation) |
┌─────────────────────────────────────────────────────────────────────┐
│ App Update Detected │
│ (schemaVersion mismatch) │
└─────────────────────────────────────────────────────────────────────┘
│
┌───────────────────┼───────────────────┐
▼ ▼ ▼
Load Snapshot Replay Ops Receive Remote Ops
(superseded version) (mixed versions) (newer/older version)
│ │ │
▼ ▼ ▼
Run migrations Apply ops as-is Migrate if needed
on full state (ops are additive) (full state imports)
When app starts and finds a snapshot with older schema version:
App Startup (schema v1 → v2)
│
▼
Load state_cache (v1 snapshot)
│
▼
Detect version mismatch: snapshot.schemaVersion < CURRENT_SCHEMA_VERSION
│
▼
Run migration chain: migrateV1ToV2(snapshot.state)
│
▼
Dispatch loadAllData(migratedState)
│
▼
Force new snapshot with schemaVersion = 2
│
▼
Continue with tail ops (ops after snapshot)
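A minimal sketch of the migration chain step above, assuming the MIGRATIONS array described later in A.7.8; this is not the actual SchemaMigrationService code.

```typescript
// Hedged sketch: run each registered migration in order until the snapshot
// reaches the current schema version.
interface MigrationStep {
  fromVersion: number;
  toVersion: number;
  migrate: (state: unknown) => unknown;
}

function migrateSnapshotState(
  state: unknown,
  fromVersion: number,
  currentVersion: number,
  migrations: MigrationStep[],
): unknown {
  let migrated = state;
  for (let v = fromVersion; v < currentVersion; v++) {
    const step = migrations.find((m) => m.fromVersion === v);
    if (!step) {
      throw new Error(`No migration registered for schema v${v} -> v${v + 1}`);
    }
    migrated = step.migrate(migrated);
  }
  return migrated;
}
```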
Operations in the log may have different schema versions. During replay:
// Operations are "additive" - they describe what changed, not full state
// Example: { opType: 'UPD', payload: { task: { id: 'x', changes: { title: 'new' } } } }
// Old ops apply to migrated state because:
// 1. Fields they reference still exist (or are mapped)
// 2. New fields have defaults filled by migration
// 3. Renamed fields are handled by migration aliases
async replayOperation(op: Operation, currentState: AppDataComplete): Promise<void> {
// Op schema version is informational - ops apply to current state structure
// The snapshot was already migrated to current schema
await this.operationApplier.applyOperations([op]);
}
Limitation: Operations are NOT migrated during replay. If a migration renames a field (e.g.,
estimate → timeEstimate), old operations referencing estimate will apply that field to the entity, potentially causing data inconsistency. To avoid this:
- Prefer additive migrations - Add new fields with defaults rather than renaming
- Use aliases in reducers - If renaming is necessary, reducers should accept both old and new field names
- Force compaction after migration - Reduce the window of mixed-version operations
Operation-level migration (transforming old ops to new schema during replay) is listed as a future enhancement in A.7.9.
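As a hedged example of the "aliases in reducers" idea, using the hypothetical estimate → timeEstimate rename from above; the types and helper are illustrative, not the app's real reducer.

```typescript
// Hedged sketch: a reducer helper accepting both the old and the new field name
// while old operations may still be replayed.
interface TaskLike {
  id: string;
  timeEstimate?: number;
}

interface IncomingChanges {
  timeEstimate?: number;
  estimate?: number; // old field name, may still appear in pre-migration ops
}

function applyTaskChanges(task: TaskLike, changes: IncomingChanges): TaskLike {
  const { estimate, ...rest } = changes;
  const aliased =
    estimate !== undefined && rest.timeEstimate === undefined
      ? { ...rest, timeEstimate: estimate } // map old field onto the new one
      : rest;
  return { ...task, ...aliased };
}
```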
When clients run different Super Productivity versions, sync must handle version differences:
┌─────────────────────────────────────────────────────────────────────┐
│ Remote Sync Scenarios │
└─────────────────────────────────────────────────────────────────────┘
Scenario 1: Newer client receives older ops
──────────────────────────────────────────
Client v2 ◄─── ops from v1 client
│
└── Ops apply normally (additive changes to migrated state)
Missing new fields use defaults from migration
Scenario 2: Older client receives newer ops
──────────────────────────────────────────
Client v1 ◄─── ops from v2 client
│
├── Individual ops: Unknown fields ignored (graceful degradation)
│ { task: { id: 'x', changes: { title: 'a', newFieldV2: 'b' } } }
│ ↑ ignored by v1
│
└── Full state imports (SYNC_IMPORT): May fail validation
→ User prompted to update app or resolve manually
Scenario 3: Mixed version sync with conflicts
──────────────────────────────────────────
Client v1 conflicts with Client v2
│
└── Conflict resolution uses entity-level comparison
Version-specific fields handled during merge
When receiving full state from remote (e.g., SYNC_IMPORT from another client):
async handleFullStateImport(payload: { appDataComplete: AppDataComplete }): Promise<void> {
const { appDataComplete } = payload;
// 1. Detect schema version of incoming state (from schemaVersion field or structure)
const incomingVersion = appDataComplete.schemaVersion ?? detectSchemaVersion(appDataComplete);
if (incomingVersion < CURRENT_SCHEMA_VERSION) {
// 2a. Migrate incoming state up to current version
const migratedState = await this.migrateState(appDataComplete, incomingVersion);
this.store.dispatch(loadAllData({ appDataComplete: migratedState }));
} else if (incomingVersion > CURRENT_SCHEMA_VERSION + MAX_VERSION_SKIP) {
// 2b. Too far ahead - reject and prompt user to update
this.snackService.open({
type: 'ERROR',
msg: T.F.SYNC.S.VERSION_TOO_OLD,
actionStr: T.PS.UPDATE_APP,
actionFn: () => window.open(UPDATE_URL, '_blank'),
});
throw new Error(`Schema version ${incomingVersion} requires app update`);
} else if (incomingVersion > CURRENT_SCHEMA_VERSION) {
// 2c. Slightly ahead - attempt graceful load with warning
PFLog.warn('Received state from newer app version', { incomingVersion, current: CURRENT_SCHEMA_VERSION });
this.snackService.open({
type: 'WARN',
msg: T.F.SYNC.S.NEWER_VERSION_WARNING, // "Data from newer app version - some features may not work"
});
// Attempt load - unknown fields will be stripped by Typia validation
// This may cause data loss for fields the older app doesn't understand
this.store.dispatch(loadAllData({ appDataComplete }));
} else {
// 2d. Same version - direct load
this.store.dispatch(loadAllData({ appDataComplete }));
}
// 3. Save snapshot (always with current schema version)
await this.saveStateCache(/* current state with schemaVersion = CURRENT_SCHEMA_VERSION */);
}
Migrations are defined in src/app/op-log/store/schema-migration.service.ts.
How to Create a New Migration:
1. Bump CURRENT_SCHEMA_VERSION
2. Add an entry to the MIGRATIONS array with fromVersion, toVersion, description, and migrate()

interface SchemaMigration {
fromVersion: number;
toVersion: number;
description: string;
migrate: (state: unknown) => unknown;
migrateOperation?: (op: Operation) => Operation | null; // For field renames/removals
requiresOperationMigration: boolean;
}
Design Principles:
| Principle | Description |
|---|---|
| Additive changes preferred | Adding new optional fields with defaults is safest |
| Avoid breaking renames | Use aliases or transformations instead |
| Preserve unknown fields | Don't strip fields from newer versions |
| Idempotent migrations | Running twice should be safe |
Version Mismatch Handling: Remote data too new → prompt user to update app. Remote data too old → show error, may need manual intervention.
Note: The legacy PFAPI system has been removed (January 2026). This section documents historical migration paths.
For users upgrading from older versions (pre-operation-log), the ServerMigrationService handles migration:
- The imported legacy data is recorded as a SYNC_IMPORT operation with the imported state

Key file: src/app/op-log/sync/server-migration.service.ts
All future schema changes should use the Schema Migration system (A.7) described above.
Migration Safety (A.7.12) ✅ - Backup created before migration; rollback on failure.
Tail Ops Consistency (A.7.13) ✅ - Tail ops are migrated during hydration to match current schema.
Unified Migrations (A.7.15) ✅ - State and operation migrations linked in single SchemaMigration definition.
| Change Type | State Migration | Op Migration | Example |
|---|---|---|---|
| Add optional field | ✅ (set default) | ❌ (old ops just don't set it) | priority?: string |
| Rename field | ✅ (copy old→new) | ✅ (transform payload) | estimate → timeEstimate |
| Remove field/feature | ✅ (delete it) | ✅ (drop ops or strip field) | Remove pomodoro |
| Change field type | ✅ (convert) | ✅ (convert in payload) | "1h" → 3600 |
| Add entity type | ✅ (initialize) | ❌ (no old ops exist) | New Board entity |
Rule of thumb: Additive changes (new optional fields, new entities) don't need operation migration. Field renames/removals require it.
Status: Design ready, not implemented. Safe while CURRENT_SCHEMA_VERSION = 1.
Strategy: Receiver migrates incoming ops before conflict detection. Sender uploads ops as-is.
Interim guardrails:
- Reject incoming data whose schemaVersion > CURRENT + MAX_VERSION_SKIP (prompt the user to update)

Required before: Any schema migration that renames/removes fields.
Status: Not yet implemented. This section documents the design for when
CURRENT_SCHEMA_VERSION > 1.
This guide provides the implementation roadmap for supporting sync between clients on different schema versions.
Bump the schema version when:
| Change Type | Bump Version? | Reason |
|---|---|---|
| Add optional field with default | ✅ Yes | Old clients won't set it; new clients need to know to apply defaults |
| Rename field | ✅ Yes | Operations need payload transformation |
| Remove field/feature | ✅ Yes | Operations may reference removed entities |
| Change field type | ✅ Yes | Payload values need conversion |
| Add new entity type | ✅ Yes | Old snapshots need initialization |
| Add new action type | ❌ No | Old clients ignore unknown actions |
| Bug fix in reducer | ❌ No | Not a schema change |
Decision rule: If the change affects how state_cache snapshots or operation payloads are structured, bump the version.
When receiving operations from older versions:
// In SchemaMigrationService.migrateOperation()
async migrateOperation(op: Operation): Promise<Operation | null> {
const opVersion = op.schemaVersion ?? 1;
if (opVersion >= CURRENT_SCHEMA_VERSION) {
return op; // Already current
}
// Run through migration chain
let migratedPayload = op.payload;
for (let v = opVersion; v < CURRENT_SCHEMA_VERSION; v++) {
const migration = MIGRATIONS.find(m => m.fromVersion === v);
if (migration?.migrateOperation) {
const result = migration.migrateOperation(op.actionType, migratedPayload);
if (result === null) {
// Operation should be dropped (removed feature)
return null;
}
migratedPayload = result;
}
}
return {
...op,
payload: migratedPayload,
schemaVersion: CURRENT_SCHEMA_VERSION,
};
}
The migration shield ensures conflict detection always compares apples-to-apples:
Remote Op (v1) Local Op (v2)
│ │
▼ │
┌─────────────────┐ │
│ Migration Layer │ │
│ (v1 → v2) │ │
└────────┬────────┘ │
│ │
▼ ▼
┌────────────────────────────┐
│ Conflict Detection │
│ (Both ops now v2) │
└────────────────────────────┘
Key invariant: Operations are ALWAYS migrated to current version BEFORE conflict detection. This ensures:
| Scenario | Behavior | User Experience |
|---|---|---|
| Newer client → Older client | Ops uploaded as-is; older client migrates on receive | Seamless |
| Older client → Newer client | Newer client migrates incoming ops | Seamless |
| Client too old (> MAX_VERSION_SKIP behind) | Reject ops, prompt update | "Please update app" modal |
| Client too new (server rejects) | N/A - server doesn't validate schema | No issue |
MAX_VERSION_SKIP = 5: Clients more than 5 versions behind cannot sync until updated. This bounds the migration chain complexity.
When deploying a schema migration:
1. Release new version with migration code
   - Add the new entry to the MIGRATIONS array
   - Bump CURRENT_SCHEMA_VERSION
2. Graceful degradation period
3. Monitoring (future)
   - Track the op.schemaVersion distribution in server logs
4. Cleanup (optional, after many versions)
   - Raise MIN_SUPPORTED_SCHEMA_VERSION
   - Clients below MIN_SUPPORTED_SCHEMA_VERSION must update before they can sync again

// packages/shared-schema/src/migrations.ts
export const MIGRATIONS: SchemaMigration[] = [
{
fromVersion: 1,
toVersion: 2,
description: 'Rename task.estimate to task.timeEstimate',
// Migrate state snapshot
migrateState: (state: unknown): unknown => {
const s = state as AppDataComplete;
return {
...s,
task: {
...s.task,
entities: Object.fromEntries(
Object.entries(s.task.entities).map(([id, task]) => [
id,
{
...task,
timeEstimate: (task as any).estimate, // Copy old field
estimate: undefined, // Remove old field
},
]),
),
},
};
},
// Migrate operation payload
requiresOperationMigration: true,
migrateOperation: (actionType: string, payload: unknown): unknown | null => {
if (actionType.includes('[Task]') && payload && typeof payload === 'object') {
const p = payload as Record<string, unknown>;
if ('estimate' in p) {
return {
...p,
timeEstimate: p.estimate,
estimate: undefined,
};
}
}
return payload; // No change for other actions
},
},
];
Before releasing any migration:
Unit tests in schema-migration.service.spec.ts:
Integration tests in cross-version-sync.integration.spec.ts:
E2E tests (manual or automated):
File-based sync providers (WebDAV, Dropbox, LocalFile) use a single-file approach via the FileBasedSyncAdapter.
Sync Triggered (WebDAV/Dropbox/LocalFile)
│
▼
FileBasedSyncAdapter.downloadOps()
│
└──► Downloads sync-data.json from remote
│
├──► Contains: state snapshot + recent ops buffer
│
└──► Compares vector clocks for conflict detection
│
▼
Process new ops, merge state
│
▼
FileBasedSyncAdapter.uploadOps()
│
└──► Upload merged state + ops
Key file: src/app/op-log/sync-providers/file-based/file-based-sync-adapter.service.ts
interface FileBasedSyncData {
version: 2;
schemaVersion: number;
vectorClock: VectorClock;
syncVersion: number; // Content-based optimistic locking
lastSeq: number;
lastModified: number;
// Full state snapshot (~95% of file size)
state: AppDataComplete;
// Recent operations for conflict detection (last 200, ~5% of file)
recentOps: CompactOperation[];
// Checksum for integrity verification
checksum?: string;
}
When two clients sync concurrently, the adapter uses "piggybacking" to ensure no operations are lost:
// In FileBasedSyncAdapter.uploadOps()
const remote = await this._downloadRemoteData(provider);
if (remote && remote.syncVersion !== expectedSyncVersion) {
// Another client synced - find ops we haven't processed
const newOps = remote.recentOps.filter((op) => op.seq > lastProcessedSeq);
// Return these as "piggybacked" ops for the caller to process
return { localOps, newOps };
}
When remote data is downloaded, the sync system creates a SYNC_IMPORT operation:
async hydrateFromRemoteSync(downloadedMainModelData?: Record<string, unknown>): Promise<void> {
// 1. Create SYNC_IMPORT operation with downloaded state
const op: Operation = {
id: uuidv7(),
opType: 'SYNC_IMPORT',
entityType: 'ALL',
payload: downloadedMainModelData,
// ...
};
await this.opLogStore.append(op, 'remote');
// 2. Force snapshot for crash safety
await this.opLogStore.saveStateCache({
state: downloadedMainModelData,
lastAppliedOpSeq: lastSeq,
// ...
});
// 3. Dispatch to NgRx
this.store.dispatch(loadAllData({ appDataComplete: downloadedMainModelData }));
}
| Source | Create Op? | Force Snapshot? |
|---|---|---|
| Hydration (startup) | No | No |
| Remote sync download | Yes (SYNC_IMPORT) | Yes |
| Backup file import | Yes (BACKUP_IMPORT) | Yes |
Archive data (archiveYoung, archiveOld) is included in the state snapshot for file-based sync.
Archives are written directly to IndexedDB via ArchiveDbAdapter (bypassing the operation log for performance).
Archive Operation (e.g., archiving a completed task)
│
├──► 1. Update archive directly via ArchiveDbAdapter
│
└──► 2. On next sync, archive is included in state snapshot
Key files:
- src/app/op-log/archive/archive-db-adapter.service.ts
- src/app/op-log/archive/archive-operation-handler.service.ts

For server-based sync, the operation log IS the sync mechanism. Individual operations are uploaded/downloaded rather than full state snapshots.
| Aspect | File-Based Sync (Part B) | Server Sync (Part C) |
|---|---|---|
| What syncs | State snapshot + recent ops | Individual operations |
| Conflict detection | Vector clock on snapshot | Entity-level per-op |
| Transport | Single file (sync-data.json) | HTTP API |
| Op-log role | Builds snapshot from ops | IS the sync |
| syncedAt tracking | Not needed | Required |
Providers that support operation sync implement OperationSyncCapable:
interface OperationSyncCapable {
supportsOperationSync: true;
uploadOps(
ops: SyncOperation[],
clientId: string,
lastKnownSeq: number,
): Promise<UploadResponse>;
downloadOps(
sinceSeq: number,
clientId?: string,
limit?: number,
): Promise<DownloadResponse>;
getLastServerSeq(): Promise<number>;
setLastServerSeq(seq: number): Promise<void>;
}
async uploadPendingOps(syncProvider: OperationSyncCapable): Promise<void> {
const pendingOps = await this.opLogStore.getUnsynced();
// Upload in batches (up to 25 ops per request)
for (const chunk of chunkArray(pendingOps, 25)) {
const response = await syncProvider.uploadOps(
chunk.map(entry => toSyncOperation(entry.op)),
clientId,
lastKnownServerSeq
);
// Mark accepted ops as synced
const acceptedSeqs = response.results
.filter(r => r.accepted)
.map(r => findEntry(r.opId).seq);
await this.opLogStore.markSynced(acceptedSeqs);
// Process piggybacked new ops from other clients
if (response.newOps?.length > 0) {
await this.processRemoteOps(response.newOps);
}
}
}
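The chunkArray helper referenced above is not defined in this document; a minimal generic version could look like this (an assumed utility, not necessarily the project's own).

```typescript
// Hedged sketch of the batching helper used for 25-op upload chunks.
function chunkArray<T>(items: T[], chunkSize: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize));
  }
  return chunks;
}

// chunkArray([1, 2, 3, 4, 5], 2) => [[1, 2], [3, 4], [5]]
```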
async downloadRemoteOps(syncProvider: OperationSyncCapable): Promise<void> {
let sinceSeq = await syncProvider.getLastServerSeq();
let hasMore = true;
while (hasMore) {
const response = await syncProvider.downloadOps(sinceSeq, undefined, 500);
// Filter already-applied ops
const newOps = response.ops.filter(op => !appliedOpIds.has(op.id));
await this.processRemoteOps(newOps);
sinceSeq = response.ops[response.ops.length - 1].serverSeq;
hasMore = response.hasMore;
await syncProvider.setLastServerSeq(response.latestSeq);
}
}
Operations that contain the full application state (SyncImport, BackupImport, Repair) can be very large (10-30MB+). Instead of sending these through the regular /api/sync/ops endpoint, they are uploaded via the dedicated /api/sync/snapshot endpoint which is optimized for large payloads.
Upload Flow
│
├──► Filter: Is opType in { SYNC_IMPORT, BACKUP_IMPORT, REPAIR }?
│ │
│ ├──► YES: Upload via /api/sync/snapshot
│ │ • Uses uploadSnapshot() method
│ │ • Maps opType to reason: initial, recovery, migration
│ │ • Supports E2E encryption
│ │
│ └──► NO: Upload via /api/sync/ops (normal batched upload)
// Full-state op types routed to snapshot endpoint
const FULL_STATE_OP_TYPES = new Set([
OpType.SyncImport,
OpType.BackupImport,
OpType.Repair,
]);
// In OperationLogUploadService._uploadPendingOpsViaApi():
const fullStateOps = pendingOps.filter((entry) =>
FULL_STATE_OP_TYPES.has(entry.op.opType as OpType),
);
const regularOps = pendingOps.filter(
(entry) => !FULL_STATE_OP_TYPES.has(entry.op.opType as OpType),
);
// Upload full-state ops via snapshot endpoint
for (const entry of fullStateOps) {
await syncProvider.uploadSnapshot(
entry.op.payload, // Full app state
entry.op.clientId,
mapOpTypeToReason(entry.op.opType), // 'initial' | 'recovery' | 'migration'
entry.op.vectorClock,
entry.op.schemaVersion,
);
}
// Upload regular ops in batches via ops endpoint
// ... (existing batch upload logic)
| OpType | Snapshot Reason | Use Case |
|---|---|---|
| SYNC_IMPORT | initial | First sync or full state refresh |
| BACKUP_IMPORT | recovery | Restoring from backup file |
| REPAIR | recovery | Auto-repair with corrected state |
For providers without API support (WebDAV/Dropbox), operations are synced via files (OperationLogUploadService and OperationLogDownloadService handle this transparently):
ops/
├── manifest.json
├── ops_CLIENT1_1701234567890.json
├── ops_CLIENT1_1701234599999.json
└── ops_CLIENT2_1701234600000.json
The manifest tracks which operation files exist. Each file contains a batch of operations. The system supports both API-based sync and this file-based fallback.
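The manifest's exact shape isn't reproduced here; a hedged sketch of the kind of bookkeeping it needs follows. Field names are assumptions, not the real format.

```typescript
// Hypothetical manifest shape for the file-based fallback.
interface OpsManifestSketch {
  // Operation files currently stored remotely, e.g. "ops_CLIENT1_1701234567890.json"
  opFiles: string[];
  // Bookkeeping that lets clients prune files past the "Remote file retention" window
  updatedAt: number;
}
```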
Conflicts are detected using vector clocks at the entity level. Importantly, a conflict can only occur when there are pending (unsynced) local operations for an entity. If local has no pending changes for an entity, any remote operation is safe to apply - there's nothing local to conflict with.
async detectConflicts(remoteOps: Operation[]): Promise<ConflictResult> {
const localPendingByEntity = await this.opLogStore.getUnsyncedByEntity();
const appliedFrontierByEntity = await this.opLogStore.getEntityFrontier();
for (const remoteOp of remoteOps) {
const entityKey = `${remoteOp.entityType}:${remoteOp.entityId}`;
const localPendingOps = localPendingByEntity.get(entityKey) || [];
// FAST PATH: No pending local ops = no conflict possible
// Conflicts require concurrent modifications. If local hasn't modified
// this entity since last sync, any remote op can be applied safely.
if (localPendingOps.length === 0) {
nonConflicting.push(remoteOp);
continue;
}
// Build local frontier from applied + pending ops
const localFrontier = mergeClocks(
appliedFrontierByEntity.get(entityKey),
...localPendingOps.map(op => op.vectorClock)
);
const comparison = compareVectorClocks(localFrontier, remoteOp.vectorClock);
if (comparison === VectorClockComparison.CONCURRENT) {
conflicts.push({
entityType: remoteOp.entityType,
entityId: remoteOp.entityId,
localOps: localPendingOps,
remoteOps: [remoteOp],
suggestedResolution: 'manual'
});
} else {
nonConflicting.push(remoteOp);
}
}
return { nonConflicting, conflicts };
}
The key insight is that conflicts are about uncommitted changes, not historical state:
If Client A sends a delete operation for a task, and Client B has no pending ops for that task, Client B should simply apply the delete - there's no local work to lose. The snapshot/frontier vector clocks track history, not intent.
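For reference, a minimal sketch of the pairwise comparison that classifies clocks as CONCURRENT; the real VectorClockService may differ in details.

```typescript
// Hedged sketch of vector clock comparison.
type VectorClock = Record<string, number>;
type ClockComparison = 'EQUAL' | 'LESS_THAN' | 'GREATER_THAN' | 'CONCURRENT';

function compareVectorClocksSketch(a: VectorClock, b: VectorClock): ClockComparison {
  let aAhead = false;
  let bAhead = false;
  for (const clientId of new Set([...Object.keys(a), ...Object.keys(b)])) {
    const av = a[clientId] ?? 0;
    const bv = b[clientId] ?? 0;
    if (av > bv) aAhead = true;
    if (bv > av) bAhead = true;
  }
  if (aAhead && bAhead) return 'CONCURRENT'; // each side has changes the other hasn't seen
  if (aAhead) return 'GREATER_THAN';
  if (bAhead) return 'LESS_THAN';
  return 'EQUAL';
}

// compareVectorClocksSketch({ A: 2 }, { B: 1 }) => 'CONCURRENT'
```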
Conflicts are automatically resolved using Last-Write-Wins (LWW) strategy via ConflictResolutionService.autoResolveConflictsLWW():
async autoResolveConflictsLWW(conflicts: EntityConflict[], nonConflictingOps: Operation[]): Promise<void> {
for (const conflict of conflicts) {
const localMaxTimestamp = Math.max(...conflict.localOps.map(op => op.timestamp));
const remoteMaxTimestamp = Math.max(...conflict.remoteOps.map(op => op.timestamp));
if (localMaxTimestamp > remoteMaxTimestamp) {
// Local wins - create new UPDATE op with current entity state
const localWinOp = await this._createLocalWinUpdateOp(conflict);
// Reject both old local and remote ops
await this.opLogStore.markRejected([...localOpIds, ...remoteOpIds]);
// New op will sync local state on next upload
await this.opLogStore.append(localWinOp, 'local');
} else {
// Remote wins (including tie)
await this.operationApplier.applyOperations(conflict.remoteOps);
await this.opLogStore.markRejected(localOpIds);
}
}
}
When local state is newer, we can't just reject the remote ops - that would cause the local state to never sync to the server. Instead:
- A new UPDATE operation is created from the current entity state, so the winning local state syncs on the next upload
- Both the old local ops and the conflicting remote ops are marked as rejected
- The new op keeps the original timestamp (using Date.now() would give unfair advantage in future conflicts)
- A warning-level log is emitted: OpLog.warn('LWW local wins - creating update op for ${entityType}:${entityId}')
When operations are rejected (either local or remote):
- They are marked with a rejectedAt timestamp
- getUnsynced() excludes rejected ops (won't re-upload)

When a moveToArchive operation conflicts with a field-level update (e.g., rename, time tracking changes), the archive operation always wins regardless of timestamps. This bypasses the normal LWW timestamp comparison because archiving represents explicit user intent that should not be reversed by a concurrent field update.
Rationale: If Client A archives a task and Client B concurrently renames it, the archive must win — otherwise, the LWW update would "resurrect" the archived task back into the active store by replacing its state.
Implementation: ConflictResolutionService checks whether either the local or remote side contains a TASK_SHARED_MOVE_TO_ARCHIVE action. If so, the archive side wins automatically, and a new archive operation is created with a merged vector clock (via _createArchiveWinOp()).
This is the first level of archive resurrection prevention. The second level is in the bulkOperationsMetaReducer (see Area 10: Bulk Application in the quick reference), which pre-scans operation batches for archive operations and skips any LWW Update operations targeting entities being archived in the same batch. This two-level defense handles the 3+ client scenario where LWW Updates can arrive before or after archive ops in the same batch.
Key files:
- src/app/op-log/sync/conflict-resolution.service.ts — Archive-wins check and _createArchiveWinOp()
- src/app/op-log/apply/bulk-hydration.meta-reducer.ts — Pre-scan archive filtering

The SupersededOperationResolverService treats moveToArchive as a special case alongside DELETE operations. When a moveToArchive op is rejected by the server due to concurrent conflicts, it is re-created with a merged vector clock instead of being discarded.
This is necessary because moveToArchive removes entities from the NgRx store (via the archive reducer), so getCurrentEntityState() returns undefined for archived entities. Without this special handling, the superseded operation resolver would be unable to re-create the operation, and archived tasks would be lost.
Implementation: Before entity-by-entity processing, SupersededOperationResolverService identifies bulk semantic operations like moveToArchive and re-creates them with the original payload and a merged vector clock, preserving the full task data in MultiEntityPayload format.
Key file: src/app/op-log/sync/superseded-operation-resolver.service.ts
The lwwUpdateMetaReducer handles LWW Update actions (created when the local side wins a conflict) differently depending on the entity's storage pattern:
| Storage Pattern | Entity Types | LWW Update Behavior |
|---|---|---|
| Adapter | TASK, PROJECT, TAG, NOTE, TASK_REPEAT_CFG, etc. | Individual entity replacement via NgRx entity adapter (updateOne or addOne) |
| Singleton | GLOBAL_CONFIG, TIME_TRACKING, MENU_TREE, WORK_CONTEXT | Entire feature state replaced with the winning data |
| Unsupported | Map, array, virtual patterns | Logged as warning; not supported for LWW |
For adapter entities, the meta-reducer also syncs relationships (e.g., project.taskIds when projectId changes, tag.taskIds when tagIds changes, TODAY_TAG.taskIds when dueDay changes, parent.subTaskIds when parentId changes).
Key file: src/app/root-store/meta/task-shared-meta-reducers/lww-update.meta-reducer.ts
A non-blocking snack notification is shown after auto-resolution.
Operations may have dependencies (e.g., subtask requires parent task):
interface OperationDependency {
entityType: EntityType;
entityId: string;
mustExist: boolean; // Hard dependency
relation: 'parent' | 'reference';
}
// Operations with missing hard dependencies are queued for retry
// After MAX_RETRY_ATTEMPTS (3), they're marked as permanently failed
When a SYNC_IMPORT or BACKUP_IMPORT operation is received, it represents an explicit user action to restore all clients to a specific point in time. Operations created without knowledge of the import are filtered out.
Implementation: SyncImportFilterService.filterOpsInvalidatedBySyncImport()
Consider this scenario: Client A restores a backup (a BACKUP_IMPORT), while Client C, which never saw the import, keeps editing tasks from the pre-import state. Client C's operations may reference entities that no longer exist in the imported state.

SYNC_IMPORT/BACKUP_IMPORT are explicit user actions to restore to a specific state. ALL operations without knowledge of the import are dropped - this ensures a true "restore to point in time" semantic.
We use vector clock comparison (not UUIDv7 timestamps) because vector clocks track causality ("did the client know about the import?") rather than wall-clock time (which can be affected by clock drift).
// In SyncImportFilterService.filterOpsInvalidatedBySyncImport()
for (const op of ops) {
// Full state import operations themselves are always valid
if (op.opType === OpType.SyncImport || op.opType === OpType.BackupImport) {
validOps.push(op);
continue;
}
// Use VECTOR CLOCK comparison to determine causality
const comparison = compareVectorClocks(op.vectorClock, latestImport.vectorClock);
if (
comparison === VectorClockComparison.GREATER_THAN ||
comparison === VectorClockComparison.EQUAL
) {
// Op was created by a client that had knowledge of the import
validOps.push(op);
} else {
// CONCURRENT or LESS_THAN: Op was created without knowledge of import
// Filter it to ensure clean slate semantics
invalidatedOps.push(op);
}
}
| Comparison | Meaning | Action |
|---|---|---|
| GREATER_THAN | Op created after seeing import | ✅ Keep (has knowledge) |
| EQUAL | Same causal history as import | ✅ Keep |
| LESS_THAN | Op dominated by import | ❌ Drop (already captured) |
| CONCURRENT | Op created without knowledge of import | ❌ Drop (clean slate) |
Example:
- Import vector clock: {A: 10, B: 5}
- Op with clock {A: 11, B: 5} → GREATER_THAN → ✅ Keep (client A saw the import)
- Op with clock {B: 3} → LESS_THAN → ❌ Drop (dominated by import)
- Op with clock {C: 1} → CONCURRENT → ❌ Drop (client C didn't know about import)

Why Drop CONCURRENT? An operation from a client that never saw the import may reference entities that no longer exist in the imported state. Dropping ensures the import truly restores all clients to the same point in time.
See operation-log-architecture-diagrams.md Section 2c for visual diagrams.
The operation log includes comprehensive validation and automatic repair to prevent data corruption and recover from invalid states.
Four validation checkpoints ensure data integrity throughout the operation lifecycle:
| Checkpoint | Location | When | Action on Failure |
|---|---|---|---|
| A | operation-log.effects.ts | Before IndexedDB write | Reject operation, log error, show snackbar |
| B | operation-log-hydrator.service.ts | After loading snapshot | Attempt repair, create REPAIR op |
| C | operation-log-hydrator.service.ts | After replaying tail ops | Attempt repair, create REPAIR op |
| D | operation-log-sync.service.ts | After applying remote ops | Attempt repair, create REPAIR op |
When validation fails at checkpoints B, C, or D, the system attempts automatic repair using the dataRepair() function. If repair succeeds, a REPAIR operation is created:
enum OpType {
// ... existing types
Repair = 'REPAIR', // Auto-repair operation with full repaired state
}
interface RepairPayload {
appDataComplete: AppDataCompleteNew; // Full repaired state
repairSummary: RepairSummary; // What was fixed
}
interface RepairSummary {
entityStateFixed: number; // Fixed ids/entities array sync
orphanedEntitiesRestored: number; // Tasks restored from archive
invalidReferencesRemoved: number; // Non-existent project/tag IDs removed
relationshipsFixed: number; // Project/tag ID consistency
structureRepaired: number; // Menu tree, inbox project creation
typeErrorsFixed: number; // Typia errors auto-fixed
}
Before writing to IndexedDB, operation payloads are validated in validate-operation-payload.ts:
validateOperationPayload(op: Operation): PayloadValidationResult {
// 1. Structural validation - payload must be object
// 2. OpType-specific validation:
// - CREATE: entity with valid 'id' field required
// - UPDATE: id + changes, or entity with id required
// - DELETE: entityId/entityIds required
// - MOVE: ids array required
// - BATCH: non-empty payload required
// - SYNC_IMPORT/BACKUP_IMPORT: appDataComplete structure required
// - REPAIR: skip (internally generated)
}
This validation is intentionally lenient - it checks structural requirements rather than deep entity validation. Full Typia validation happens at state checkpoints.
During hydration, state is validated at two points:
App Startup
│
▼
Load snapshot from state_cache
│
├──► CHECKPOINT B: Validate snapshot
│ │
│ └──► If invalid: repair + create REPAIR op
│
▼
Dispatch loadAllData(snapshot)
│
▼
Replay tail operations
│
└──► CHECKPOINT C: Validate current state
│
└──► If invalid: repair + create REPAIR op + dispatch repaired state
// In operation-log-hydrator.service.ts
private async _validateAndRepairState(state: AppDataCompleteNew): Promise<AppDataCompleteNew> {
if (this._isRepairInProgress) return state; // Prevent infinite loops
const result = this.validateStateService.validateAndRepair(state);
if (!result.wasRepaired) return state;
this._isRepairInProgress = true;
try {
await this.repairOperationService.createRepairOperation(
result.repairedState,
result.repairSummary,
);
return result.repairedState;
} finally {
this._isRepairInProgress = false;
}
}
After applying remote operations, state is validated:
- operation-log-sync.service.ts - after applying non-conflicting ops (when no conflicts)
- conflict-resolution.service.ts - after resolving all conflicts

This catches invalid state introduced by applying remote operations before it is persisted or synced further.
Wraps validation and repair functionality using Typia and cross-model validation:
@Injectable({ providedIn: 'root' })
export class ValidateStateService {
validateState(state: AppDataCompleteNew): StateValidationResult {
// 1. Run Typia schema validation
const typiaResult = validateAllData(state);
// 2. Run cross-model relationship validation
// NOTE: isRelatedModelDataValid errors are now caught and treated as validation failures
// rather than crashing, allowing validateAndRepair to trigger dataRepair.
let isRelatedValid = true;
try {
isRelatedValid = isRelatedModelDataValid(state);
} catch (e) {
PFLog.warn(
'isRelatedModelDataValid threw an error, treating as validation failure',
e,
);
isRelatedValid = false;
}
return {
isValid,
typiaErrors,
crossModelError: !isRelatedValid
? 'isRelatedModelDataValid threw error'
: undefined,
};
}
validateAndRepair(state: AppDataCompleteNew): ValidateAndRepairResult {
// 1. Validate
// 2. If invalid: run dataRepair()
// 3. Re-validate repaired state
// 4. Return repaired state + summary
}
}
Creates REPAIR operations and notifies the user:
@Injectable({ providedIn: 'root' })
export class RepairOperationService {
async createRepairOperation(
repairedState: AppDataCompleteNew,
repairSummary: RepairSummary,
): Promise<void> {
// 1. Create REPAIR operation with repaired state + summary
// 2. Append to operation log
// 3. Save state cache snapshot
// 4. Show notification to user
}
static createEmptyRepairSummary(): RepairSummary {
return {
entityStateFixed: 0,
orphanedEntitiesRestored: 0,
invalidReferencesRemoved: 0,
relationshipsFixed: 0,
structureRepaired: 0,
typeErrorsFixed: 0,
};
}
}
This section documents known edge cases and areas requiring further design or implementation.
Status: ✅ Implemented (December 2025)
When IndexedDB storage quota is exceeded, the system handles it gracefully:
Implementation (see operation-log.effects.ts):
1. Error Detection: Catches QuotaExceededError including browser variants:
   - DOMException with name QuotaExceededError
   - NS_ERROR_DOM_QUOTA_REACHED (Firefox)
2. Emergency Compaction: Triggers emergencyCompact() with shorter retention:
   - Uses EMERGENCY_COMPACTION_RETENTION_MS instead of the normal COMPACTION_RETENTION_MS
   - Still only deletes ops that have been synced (syncedAt set)
3. Circuit Breaker: Flag isHandlingQuotaExceeded prevents infinite retry loops
4. User Notification: On permanent failure (after emergency compaction fails), the user is notified

Constants (operation-log.const.ts):
- EMERGENCY_COMPACTION_RETENTION_MS = 24 * 60 * 60 * 1000 (1 day)
- MAX_COMPACTION_FAILURES = 3

Status: Implemented ✅
The 500-ops compaction trigger uses a persistent counter stored in state_cache.compactionCounter, shared across tabs and restarts.
Status: ⚠️ Not Fully Defined — Edge Case Risk
Risk Level: MEDIUM — Silent data loss possible in crash/interruption scenarios.
What if data exists in both pf AND SUP_OPS databases?
SUP_OPS.state_cache exists, use it; ignore pf entirelypf after partial migration completedpf has newer data than SUP_OPSProposed solution:
- Store a migrationTimestamp in both SUP_OPS.state_cache and pf.META_MODEL
- If pf.lastUpdate > SUP_OPS.migrationTimestamp: Warn user, offer merge or re-migrate
- If pf is older: Proceed with SUP_OPS (current behavior)

Mitigation (current): Genesis migration is a one-time event. Once SUP_OPS is established, all writes go there. Risk is limited to the migration moment itself.
Status: Handled via Locks
- Compaction runs under the sp_op_log_compact lock
- Compaction only deletes ops with syncedAt set, so unsynced ops from active sync are preserved

The application splits data into "Active State" (in-memory, Redux) and "Archive State" (on-disk, rarely accessed) to maintain performance.
In the legacy system, changing one task in the archive required re-uploading the entire (potentially massive) archive file. This was bandwidth-intensive and slow.
In the Operation Log architecture, we do NOT sync the archive files directly. Instead, we sync the Instructions that modify the archives. Because the logic is deterministic, all clients end up with identical archive files without ever transferring them.
| Component | Sync Strategy | Mechanism |
|---|---|---|
| Active State | Operation Log | Standard sync (Ops applied to Redux) |
| ArchiveYoung | Deterministic Side Effect | moveToArchive ops trigger local moves from Active → Young on all clients |
| ArchiveOld | Deterministic Side Effect | flushYoungToOld ops trigger local flush from Young → Old on all clients |
When a user archives tasks:
1. Client A emits a moveToArchive operation and moves the affected tasks locally from the Active State into ArchiveYoung.
2. Client B receives the same moveToArchive operation and performs the identical local move into ArchiveYoung.

Result: Both clients have identical ArchiveYoung files, but zero archive data was transferred over the network.
Planned for future implementation. When ArchiveYoung grows too large, client emits flushYoungToOld operation. All clients execute the same flush logic (move items older than X days), keeping ArchiveOld consistent.
All archive operations MUST be idempotent:
| Operation | Guarantee |
|---|---|
moveToArchive | Skip if task already in archive |
flushYoungToOld | Move only items not already in Old |
restoreFromArchive | Skip if task already in Active |
Edge cases: Missing entities (deleted/out-of-order) → queue for retry or skip. Out-of-order flush → idempotent no-op if Young is empty.
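A hedged sketch of what the moveToArchive idempotency guard could look like; the types are simplified and this is not the actual archive handler.

```typescript
// Hedged sketch: replaying the same moveToArchive op twice must not change the archive.
interface ArchiveStateSketch {
  taskIds: string[];
  tasks: Record<string, { id: string }>;
}

function applyMoveToArchive(
  archive: ArchiveStateSketch,
  tasks: { id: string }[],
): ArchiveStateSketch {
  // Skip tasks that are already archived (idempotency guarantee from the table above)
  const newTasks = tasks.filter((t) => !archive.tasks[t.id]);
  if (newTasks.length === 0) {
    return archive; // no-op on replay
  }
  return {
    taskIds: [...archive.taskIds, ...newTasks.map((t) => t.id)],
    tasks: {
      ...archive.tasks,
      ...Object.fromEntries(newTasks.map((t) => [t.id, t])),
    },
  };
}
```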
Time tracking data follows a special sync pattern that differs from regular entities.
interface TimeTrackingState {
project: {
[projectId: string]: {
[dateStr: string]: { s?: number; e?: number; b?: number; bt?: number };
};
};
tag: {
[tagId: string]: {
[dateStr: string]: { s?: number; e?: number; b?: number; bt?: number };
};
};
}
// s = start time, e = end time, b = break count, bt = break time
This is a 3-level nested structure: category → contextId → date → data.
Time tracking data exists in three locations:
| Location | Contents | Sync Frequency |
|---|---|---|
| Active State | Today's time tracking only | Every sync (small) |
| archiveYoung | Recent data (< 21 days) | Daily (medium) |
| archiveOld | Historical data (≥ 21 days) | On flush only (rare) |
This split reduces sync payload size significantly.
Daily (finish work):
Active TimeTracking → archiveYoung
(Only today's data stays in active)
Every ~14 days (flush):
archiveYoung → archiveOld
(ALL timeTracking data moves, not threshold-based)
When merging time tracking from multiple sources (e.g., during import):
Priority: current > archiveYoung > archiveOld
Deep Merge at Field Level:
// If current has {s: 100}, archiveYoung has {e: 200}, archiveOld has {b: 5}
// Result: {s: 100, e: 200, b: 5}
This ensures no data loss when fields are partially populated across sources.
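A hedged sketch of that field-level merge for a single date entry; the real merge-time-tracking-states.ts walks the full category → contextId → date structure, and the type name below is an assumption.

```typescript
// Hedged sketch: priority current > archiveYoung > archiveOld, merged per field.
interface TTEntrySketch {
  s?: number; // start
  e?: number; // end
  b?: number; // break count
  bt?: number; // break time
}

function mergeDateEntry(
  current?: TTEntrySketch,
  young?: TTEntrySketch,
  old?: TTEntrySketch,
): TTEntrySketch {
  // Later spreads win, so higher-priority sources override lower-priority ones
  return { ...old, ...young, ...current };
}

// mergeDateEntry({ s: 100 }, { e: 200 }, { b: 5 }) => { s: 100, e: 200, b: 5 }
```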
Time tracking uses Last-Write-Wins (LWW) for conflicts:
project[id][date], the last operation winsFresh clients receive time tracking via SYNC_IMPORT:
- The imported state includes timeTracking + archiveYoung.timeTracking + archiveOld.timeTracking

Without SYNC_IMPORT: Client replays all individual syncTimeTracking operations incrementally (slower but correct).
| File | Purpose |
|---|---|
| merge-time-tracking-states.ts | Three-source merge with priority |
| sort-data-to-flush.ts | Archive flush logic (young→old) |
| time-tracking.reducer.ts | NgRx reducer for syncTimeTracking |
| archive-operation-handler.service.ts | Handles flushYoungToOld remotely |
This section documents the architectural principles ensuring that related model changes happen atomically, preventing state inconsistency during sync.
When a user deletes a tag, multiple entities must be updated: the tag entity itself is removed, and every task referencing it needs its tagIds updated. If these changes happen in separate NgRx effects, the operation log can capture inconsistent intermediate states.
Principle: All related entity changes from a single user action should happen in a single reducer pass.
Meta-reducers intercept actions before they reach feature reducers and can modify the entire store state atomically:
// tag-shared.reducer.ts - handles deleteTag atomically
[deleteTag.type]: () => {
// 1. Remove tag references from tasks
// 2. Delete orphaned tasks (no project, no tags, no parent)
// 3. Clean up task repeat configs
// 4. Clean up time tracking state
return updatedState; // All changes in one pass
},
| Meta-Reducer | Purpose |
|---|---|
| tagSharedMetaReducer | Tag deletion cleanup (tasks, repeat cfgs, time tracking) |
| projectSharedMetaReducer | Project deletion cleanup |
| taskSharedCrudMetaReducer | Task CRUD with tag/project updates |
| taskSharedLifecycleMetaReducer | Task lifecycle (archive, restore) |
| taskSharedSchedulingMetaReducer | Task scheduling with Today tag updates |
| plannerSharedMetaReducer | Planner day management |
| taskRepeatCfgSharedMetaReducer | Repeat config deletion with task cleanup |
| issueProviderSharedMetaReducer | Issue provider updates |
| operationCaptureMetaReducer | Captures before/after state, enqueues entity changes |
The OperationCaptureService and operation-capture.meta-reducer work together using a simple FIFO queue to capture actions:
1. The meta-reducer calls OperationCaptureService.enqueue() with the action
2. OperationLogEffects later calls OperationCaptureService.dequeue() to get entity changes
3. The resulting entityChanges[] array is attached to the created operation

The FIFO queue works because NgRx reducers process actions sequentially, and effects use concatMap for sequential processing. Order is preserved between enqueue and dequeue.
Note: Most actions return empty entityChanges[] - the action payload is sufficient for replay. Only TIME_TRACKING and TASK time sync actions have special handling to extract entity changes from the action payload.
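A heavily simplified sketch of the FIFO hand-off follows; the real OperationCaptureService also extracts entity changes for the TIME_TRACKING/TASK special cases mentioned above, and the class name here is illustrative.

```typescript
// Hedged sketch of the enqueue/dequeue hand-off between meta-reducer and effect.
interface CapturedEntry {
  actionType: string;
  entityChanges: unknown[];
}

class CaptureQueueSketch {
  private readonly queue: CapturedEntry[] = [];

  // Called synchronously from the meta-reducer for every persistent action
  enqueue(actionType: string, entityChanges: unknown[] = []): void {
    this.queue.push({ actionType, entityChanges });
  }

  // Called from OperationLogEffects (concatMap keeps the order aligned)
  dequeue(): CapturedEntry | undefined {
    return this.queue.shift();
  }
}
```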
User Action (e.g., Delete Tag)
│
▼
tagSharedMetaReducer (+ other meta-reducers)
├──► Atomically update all related entities
│
▼
Feature Reducers
│
▼
operation-capture.meta-reducer
├──► Call OperationCaptureService.enqueue(action)
│ └──► Extracts entity changes from action payload (for special cases)
│ └──► Pushes to FIFO queue
│
▼
OperationLogEffects
├──► Call OperationCaptureService.dequeue() to get entity changes
└──► Create single Operation with action payload
| Scenario | Use Meta-Reducer | Use Effect |
|---|---|---|
| Updating related entities in store | ✅ | ❌ |
| Deleting entity with cleanup | ✅ | ❌ |
| UI notifications (snackbar, sound) | ❌ | ✅ |
| External API calls | ❌ | ✅ |
| Archive operations (async I/O) | ❌ | ✅ |
| Navigation/routing | ❌ | ✅ |
Rule of thumb: If it modifies NgRx state, use a meta-reducer. If it's a side effect (I/O, UI, external), use an effect with LOCAL_ACTIONS.
For references between entities (e.g., `tag.taskIds`), we use a "board-style" pattern where:

- The source of truth for membership is the child's reference list (`task.tagIds`)
- The parent's list (`tag.taskIds`) is for ordering only

Selectors recompute membership from the source of truth, providing self-healing:
```ts
// work-context.selectors.ts
export const computeOrderedTaskIdsForTag = (
  tag: Tag,
  allTasks: Dictionary<Task>,
): string[] => {
  // Use tag.taskIds for order, but filter by actual task.tagIds membership
  const validFromTagList = tag.taskIds.filter((id) => {
    const task = allTasks[id];
    return task && !task.parentId && task.tagIds.includes(tag.id);
  });
  // Add any tasks that reference this tag but aren't in the list
  const missingTasks = Object.values(allTasks).filter(
    (task) =>
      task &&
      !task.parentId &&
      task.tagIds.includes(tag.id) &&
      !tag.taskIds.includes(task.id),
  );
  return [...validFromTagList, ...missingTasks.map((t) => t.id)];
};
```
This ensures stale references are filtered and missing references are auto-added.
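For example, given a stale `tag.taskIds` entry and a task that references the tag but is missing from the list, the selector repairs both. A small sketch with fixture data (only the fields the function reads; the casts stand in for the full `Tag`/`Task` types):

```ts
// Fixture data with only the fields computeOrderedTaskIdsForTag reads.
const tag = { id: 'work', taskIds: ['t1', 'ghost'] };
const allTasks = {
  t1: { id: 't1', parentId: undefined, tagIds: ['work'] },
  t2: { id: 't2', parentId: undefined, tagIds: ['work'] }, // not yet in tag.taskIds
};

// 'ghost' points at no task and is dropped; 't2' references the tag and is appended.
const ordered = computeOrderedTaskIdsForTag(tag as any, allTasks as any); // ['t1', 't2']
```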
When adding new entities or relationships:
- Update `ACTION_AFFECTED_ENTITIES` in `state-change-capture.service.ts`
- Add to `LOCAL_ACTIONS` in effects for side effects only

Related infrastructure already in place:

- `SchemaMigration` includes both `migrateState` and an optional `migrateOperation` (a sketch follows the table below)
- Persistent compaction counter in `state_cache`, shared across tabs/restarts
- `syncedAt` index - Index on ops store for faster `getUnsynced()` queries
- Emergency handling of `QuotaExceededError` with circuit breaker to prevent infinite loops

| Item | Section | Risk if Missing | When Critical |
|---|---|---|---|
| Conflict-aware op migration | A.7.11 | Conflicts may compare mismatched schemas | Before any schema migration that renames/removes fields |
Note: A.7.11 is required for cross-version sync. Currently safe because `CURRENT_SCHEMA_VERSION = 1` (all clients are on the same version). See A.7.11 Interim Guardrails for the pre-release checklist.
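A hedged sketch of what a migration entry might look like, assuming `SchemaMigration` pairs a state migration with an optional per-operation migration (the interface shape and field names below are illustrative; see `schema-migration.service.ts` for the real definition):

```ts
// Hypothetical shape for illustration only.
interface SchemaMigrationSketch {
  toVersion: number;
  migrateState: (state: Record<string, unknown>) => Record<string, unknown>;
  migrateOperation?: (op: { entityType: string; payload: unknown }) => {
    entityType: string;
    payload: unknown;
  };
}

const exampleMigration: SchemaMigrationSketch = {
  toVersion: 2,
  migrateState: (state) => ({
    ...state,
    // a real migration would walk the snapshot and rename/transform fields here
  }),
  // Without this, ops written by old clients keep the old field names and
  // conflict detection compares mismatched schemas (the A.7.11 risk above).
  migrateOperation: (op) => op, // a real migration would rewrite op.payload the same way
};
```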
Implemented components (Part C):

- `PfapiStoreDelegateService` (reads all NgRx models for sync)
- `hydrateFromRemoteSync()` (B.3)
- Provider sync interface (`OperationSyncCapable`)
- `OperationLogSyncService` (orchestration, `processRemoteOps`, `detectConflicts`)
- `OperationLogUploadService` (API upload + file-based fallback, batching)
- `OperationLogDownloadService` (API download + file-based fallback, pagination)
- `ConflictResolutionService` (LWW auto-resolution + batch apply; see the sketch after this list)
- `VectorClockService` (global/entity frontier tracking, compaction recovery)
- `DependencyResolverService` (extract/check hard/soft dependencies)
- `OperationApplierService` (fail-fast on missing deps → throws `SyncStateCorruptedError`)
- Rejected operation handling (`rejectedAt` field + user notification)
- Bounded download memory (`MAX_DOWNLOAD_OPS_IN_MEMORY = 50,000`)
- Integration tests (`sync-scenarios.integration.spec.ts`)
- E2E tests (`supersync.spec.ts` with Playwright)
- `OperationEncryptionService` for payload encryption/decryption
- Structured error codes (`SYNC_ERROR_CODES`) for upload results
- Server index on `(user_id, received_at)` for cleanup queries

Cross-version limitation: Part C is complete for clients on the same schema version. When `CURRENT_SCHEMA_VERSION > 1` and clients run different versions, A.7.11 (conflict-aware op migration) is required to ensure correct conflict detection.
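To make the LWW auto-resolution concrete, a minimal sketch of picking a winner between two conflicting ops (the field names are illustrative; the real `ConflictResolutionService` also handles batching and user notification):

```ts
// Illustrative operation shape; the real Operation type is defined in operation.types.ts.
interface OpSketch {
  entityId: string;
  timestamp: number; // wall-clock ms used as the LWW tie-breaker here
  clientId: string;
}

// Last-Writer-Wins: keep the op with the newer timestamp; fall back to
// clientId so both clients resolve the tie identically.
const pickWinner = (local: OpSketch, remote: OpSketch): OpSketch => {
  if (local.timestamp !== remote.timestamp) {
    return local.timestamp > remote.timestamp ? local : remote;
  }
  return local.clientId > remote.clientId ? local : remote;
};
```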
| Component | Description | Priority | Notes |
|---|---|---|---|
| Auto-merge | Automatic merge for non-conflicting fields | Low | |
| Undo/Redo | Leverage op-log for undo history | Low | |
| Tombstones | Soft delete with retention window | Medium | Deferred Dec 2025 - current safeguards sufficient (see todo.md for evaluation) |
| A.7.11 | Conflict-aware operation migration | High | Required before CURRENT_SCHEMA_VERSION > 1 for cross-version sync |
Recently Completed (December 2025):
- Server Sync (SuperSync): Full upload/download infrastructure with conflict detection, user resolution UI, and integration tests
- End-to-End Encryption: AES-256-GCM payload encryption with Argon2id key derivation via `OperationEncryptionService`
- Server Security Hardening: Audit logging, structured error codes, request deduplication, transaction isolation, input validation, rate limiting
- Unified Archive Handling: `ArchiveOperationHandler` is now the single source of truth for all archive operations, used by both local effects and remote operation application
- Simplified OperationCaptureService: Refactored to FIFO queue with reference equality optimization for detecting changed feature states
- Simplified OperationApplierService: Refactored to fail-fast approach - throws `SyncStateCorruptedError` on missing hard deps (no retry queues)
- Tag sanitization: Remove subtask IDs from tags when parent deleted, filter non-existent taskIds on sync
- Anchor-based move operations: All task drag-drop moves now use `afterTaskId` instead of full list replacement (including subtask moves; see the sketch after this list)
- Quota handling: Emergency compaction and circuit breaker on `QuotaExceededError`
- `syncedAt` index: Faster `getUnsynced()` queries
- Persistent compaction counter: Tracks ops across tabs/restarts
- Plugin data sync: Operation logging for plugin user data and metadata
- Gap detection: Download operations detect and report sequence gaps
- Server-side conflict detection: Prevents concurrent modifications on server
- Compaction race safety: Safety check to abort deletion if new ops written during snapshot
- Entity validation in meta-reducers: Improved getTag/getProject helpers with validation and safe variants
- Project cleanup in deleteTasks: handleDeleteTasks now cleans up project taskIds/backlogTaskIds
- Archive validation: archiveOld tasks now validated for project/tag references, null-safety added
- Lock service robustness: Handle NaN timestamps and invalid lock formats in fallback lock
- Array payload rejection: Explicit check to reject arrays (which bypass `typeof === 'object'`)
- Pending operation expiry: Operations pending >24h are rejected instead of replayed (`PENDING_OPERATION_EXPIRY_MS`)
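As referenced in the anchor-based move item above, a minimal sketch of applying such a move, assuming the operation carries only the moved task id and an optional `afterTaskId` (null meaning "move to the start"); this is illustrative, not the real reducer:

```ts
// Reorders taskIds by removing the moved id and re-inserting it after the anchor.
// Works the same whether the list is a project backlog, tag list, or Today list.
const applyAnchorMove = (
  taskIds: string[],
  movedId: string,
  afterTaskId: string | null,
): string[] => {
  const without = taskIds.filter((id) => id !== movedId);
  if (afterTaskId === null) {
    return [movedId, ...without]; // move to the start
  }
  const anchorIdx = without.indexOf(afterTaskId);
  // If the anchor is missing (e.g., deleted concurrently), append to the end.
  if (anchorIdx === -1) {
    return [...without, movedId];
  }
  return [
    ...without.slice(0, anchorIdx + 1),
    movedId,
    ...without.slice(anchorIdx + 1),
  ];
};

// applyAnchorMove(['a', 'b', 'c'], 'a', 'b') => ['b', 'a', 'c']
```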
src/app/op-log/
├── operation.types.ts # Type definitions (Operation, OpType, EntityType)
├── operation-log.const.ts # Constants (thresholds, timeouts, limits)
├── operation-log.effects.ts # Action capture + META_MODEL bridge
├── operation-converter.util.ts # Op ↔ Action conversion
├── persistent-action.interface.ts # PersistentAction type + isPersistentAction guard
├── entity-key.util.ts # Entity key generation utilities
├── store/
│ ├── operation-log-store.service.ts # SUP_OPS IndexedDB wrapper
│ ├── operation-log-hydrator.service.ts # Startup hydration + crash recovery
│ ├── operation-log-compaction.service.ts # Snapshot + cleanup + emergency mode
│ ├── operation-log-manifest.service.ts # File-based sync manifest management
│ ├── operation-log-migration.service.ts # Genesis migration from legacy
│ └── schema-migration.service.ts # State schema migrations
├── sync/
│ ├── operation-log-sync.service.ts # Orchestration (Part C)
│ ├── operation-log-download.service.ts # Download ops (API + file fallback)
│ ├── operation-log-upload.service.ts # Upload ops (API + file fallback)
│ ├── operation-encryption.service.ts # E2EE payload encryption (AES-256-GCM)
│ ├── vector-clock.service.ts # Global/entity frontier tracking
│ ├── lock.service.ts # Cross-tab locking (Web Locks + fallback)
│ ├── conflict-resolution.service.ts # LWW conflict resolution + user notification
│ ├── sync-import-filter.service.ts # Filter ops invalidated by SYNC_IMPORT
│ ├── immediate-upload.service.ts # Trigger immediate sync on critical ops
│ ├── super-sync-status.service.ts # SuperSync connection status tracking
│ ├── server-migration.service.ts # Server-side schema migration handling
│ ├── operation-write-flush.service.ts # Batch write operations with flush
│ └── operation-sync.util.ts # Sync helper utilities
├── processing/
│ ├── operation-applier.service.ts # Apply ops with fail-fast dependency handling
│ ├── operation-capture.service.ts # FIFO queue for capturing entity changes
│ ├── operation-capture.meta-reducer.ts # Meta-reducer for before/after state capture
│ ├── hydration-state.service.ts # Track hydration/remote ops application state
│ ├── archive-operation-handler.service.ts # Unified handler for archive side effects
│ ├── archive-operation-handler.effects.ts # Routes local actions to ArchiveOperationHandler
│ ├── validate-state.service.ts # Typia + cross-model validation
│ ├── validate-operation-payload.ts # Checkpoint A - payload validation
│ └── repair-operation.service.ts # REPAIR operation creation
├── integration/ # Integration test suite
│ ├── sync-scenarios.integration.spec.ts # Protocol-level sync tests
│ ├── multi-client-sync.integration.spec.ts # Multi-client scenarios
│ ├── state-consistency.integration.spec.ts # State validation tests
│ └── helpers/ # Test utilities
│ ├── mock-sync-server.helper.ts # Server mock for tests
│ ├── simulated-client.helper.ts # Client simulation
│ ├── test-client.helper.ts # Test client utilities
│ └── operation-factory.helper.ts # Test operation builders
└── benchmarks/
└── operation-log-stress.spec.ts # Performance stress tests
src/app/features/work-context/store/
├── work-context-meta.actions.ts # Move actions (moveTaskInTodayList, etc.)
└── work-context-meta.helper.ts # Anchor-based positioning helpers
src/app/op-log/sync-providers/
├── super-sync/ # SuperSync server provider
│ ├── super-sync.ts # Server-based sync implementation
│ └── super-sync.model.ts # SuperSync types
├── file-based/ # File-based providers (Part B)
│ ├── file-based-sync-adapter.service.ts # Unified adapter for file providers
│ ├── file-based-sync.types.ts # FileBasedSyncData types
│ ├── webdav/ # WebDAV provider
│ ├── dropbox/ # Dropbox provider
│ └── local-file/ # Local file sync provider
├── provider-manager.service.ts # Provider activation/management
├── wrapped-provider.service.ts # Provider wrapper with encryption
└── credential-store.service.ts # OAuth/credential storage
e2e/
├── tests/sync/supersync.spec.ts # E2E SuperSync tests (Playwright)
├── pages/supersync.page.ts # Page object for sync tests
└── utils/supersync-helpers.ts # E2E test utilities