# Operation Payload Optimization Discussion

`docs/sync-and-op-log/operation-payload-optimization-discussion.md`

**Date:** December 5, 2025
**Context:** Analysis of operation payload sizes and optimization opportunities
We analyzed the codebase for places where many operations, or very large ones, are produced.
| Issue | Severity | Impact |
|---|---|---|
| Tag deletion cascade | High | Creates N+1 operations for N tasks |
| Full payload storage | High | Large payloads stored repeatedly |
| batchUpdateForProject nesting | Medium | Single op contains nested array |
| Archive operations | Medium | One bulk op for many tasks |
| Single operations per bulk entity | Medium | N operations instead of 1 |
- Added `LARGE_PAYLOAD_WARNING_THRESHOLD_BYTES` (10 KB) with logging when exceeded.
- `batchUpdateForProject` now chunks large operations into batches of `MAX_BATCH_OPERATIONS_SIZE` (50).

The `moveToArchive` action was identified as having large payloads (~2 KB per task). We explored multiple optimization approaches.
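The warning-threshold check can be sketched roughly as follows; apart from `LARGE_PAYLOAD_WARNING_THRESHOLD_BYTES`, the function names here are illustrative, not the actual implementation:

```typescript
// Illustrative sketch of a payload-size guard; the real implementation may differ.
const LARGE_PAYLOAD_WARNING_THRESHOLD_BYTES = 10 * 1024; // 10 KB, per the doc

// Estimate the serialized size of an operation payload in bytes.
const estimatePayloadBytes = (payload: unknown): number =>
  new TextEncoder().encode(JSON.stringify(payload)).length;

// Warn (but do not block) when an operation exceeds the threshold.
const checkPayloadSize = (opType: string, payload: unknown): number => {
  const size = estimatePayloadBytes(payload);
  if (size > LARGE_PAYLOAD_WARNING_THRESHOLD_BYTES) {
    console.warn(`[op-log] ${opType} payload is ${size} bytes (> 10 KB)`);
  }
  return size;
};
```

Note that the check measures serialized bytes rather than character count, since multi-byte characters in task titles would otherwise be undercounted.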
Two sync systems exist: the operation log and model file sync.

When Client A archives tasks:

- The operation syncs via the op log.
- The `archiveYoung` model file syncs later (daily).

When Client B receives the operation:

- The archived tasks may not yet be present in its `archiveYoung` model file, so the operation must carry full task data.
```ts
moveToArchive: {
  taskIds: string[],          // Persisted
  _tasks: TaskWithSubTasks[], // Stripped before storage
}
```
Problem: remote operations won't have `_tasks`, but the full data is still needed for sync.
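The strip-before-storage step could look something like this; `stripEphemeralFields` is a hypothetical helper, and the convention assumed here is that underscore-prefixed fields are ephemeral:

```typescript
// Hypothetical helper: drop ephemeral (underscore-prefixed) fields such as
// `_tasks` from an operation payload before it is persisted to the op log.
const stripEphemeralFields = <T extends Record<string, unknown>>(
  payload: T,
): Partial<T> =>
  Object.fromEntries(
    Object.entries(payload).filter(([key]) => !key.startsWith('_')),
  ) as Partial<T>;
```

Calling this on `{ taskIds: [...], _tasks: [...] }` keeps only `taskIds`, which is exactly why a remote client never receives `_tasks`.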
### Meta-reducer enrichment

Capture tasks from state before deletion and attach them to the action for the effect.

Why it seemed possible:

- `addTask` ops are applied before `moveToArchive`.

Problems:

- As the timing analysis below shows, the effect runs after the reducer has already deleted the entities.
### Two-phase archive

Split into `writeToArchive` (full data) + `deleteTasks` (IDs only).

Problem: the total payload size is the same; this just adds complexity without benefit.
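A quick back-of-the-envelope check shows why the split saves nothing: the full task data still has to travel in one of the two operations, and the extra envelope even adds bytes. The op shapes below are hypothetical simplifications:

```typescript
// Measure serialized JSON size in bytes.
const byteSize = (o: unknown): number =>
  new TextEncoder().encode(JSON.stringify(o)).length;

const tasks = [{ id: 'a', title: 'Task A', subTasks: [] }];

// Current single op: IDs + full data together.
const fullOp = { type: 'moveToArchive', taskIds: ['a'], tasks };

// Two-phase split: full data in writeToArchive, IDs in deleteTasks.
const writeOp = { type: 'writeToArchive', tasks };
const deleteOp = { type: 'deleteTasks', taskIds: ['a'] };

// The combined split payload is never smaller than the single op.
const combined = byteSize(writeOp) + byteSize(deleteOp);
```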
### Operation-derived archive

The archive becomes a separate IndexedDB store populated entirely by operations:

```ts
archiveTask: { taskIds: string[] } // IDs only
```

A meta-reducer moves task data from active state to the archive before deletion.
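A minimal sketch of the idea, with hypothetical state, action, and reducer shapes (the real NgRx store is considerably richer):

```typescript
// Hypothetical, simplified shapes for illustration only.
interface Task {
  id: string;
  title: string;
}
interface AppState {
  tasks: Record<string, Task>; // active tasks
  archive: Record<string, Task>; // operation-derived archive store
}
type Action = { type: string; taskIds?: string[] };
type Reducer = (state: AppState, action: Action) => AppState;

// Inner reducer: deletes archived tasks from active state (the op carries IDs only).
const taskReducer: Reducer = (state, action) => {
  if (action.type === 'archiveTask' && action.taskIds) {
    const tasks = { ...state.tasks };
    for (const id of action.taskIds) delete tasks[id];
    return { ...state, tasks };
  }
  return state;
};

// Meta-reducer: before the inner reducer deletes the tasks, copy them from
// active state into the archive, so the operation itself needs only IDs.
const archiveMetaReducer =
  (reducer: Reducer): Reducer =>
  (state, action) => {
    if (action.type === 'archiveTask' && action.taskIds) {
      const archive = { ...state.archive };
      for (const id of action.taskIds) {
        if (state.tasks[id]) archive[id] = state.tasks[id];
      }
      state = { ...state, archive };
    }
    return reducer(state, action);
  };
```

Because the meta-reducer wraps the inner reducer, the copy into the archive is guaranteed to happen before the deletion, which is the whole point of the design.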
Benefits:

- Small operation payloads (IDs only).

Drawbacks:

- Very high complexity for only medium reliability (see the comparison table below).

> "There can be more than 20,000 archived tasks"

If the archive were in the NgRx store, all 20,000+ archived tasks would have to be held in memory. This ruled out simple "add `isArchived` flag" approaches.
Operations have causal ordering. When a remote client receives `moveToArchive`:

- `addTask` operations have already been applied (dependency).

But the effect runs AFTER the reducer deletes the entities. This timing makes the approach impractical without complex meta-reducer side effects.
## Recommendation

Keep the current full-payload approach.
For very large archives, chunk operations:

```ts
const ARCHIVE_CHUNK_SIZE = 25;

async moveToArchive(tasks: TaskWithSubTasks[]): Promise<void> {
  const chunks = chunkArray(tasks, ARCHIVE_CHUNK_SIZE);
  for (const chunk of chunks) {
    this._store.dispatch(TaskSharedActions.moveToArchive({ tasks: chunk }));
  }
  await this._archiveService.moveTasksToArchiveAndFlushArchiveIfDue(tasks);
}
```
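The `chunkArray` utility referenced above is assumed to be a small helper along these lines (sketch, not the actual implementation):

```typescript
// Split an array into consecutive chunks of at most `size` elements.
const chunkArray = <T>(arr: T[], size: number): T[][] => {
  const chunks: T[][] = [];
  for (let i = 0; i < arr.length; i += size) {
    chunks.push(arr.slice(i, i + size));
  }
  return chunks;
};
```

With `ARCHIVE_CHUNK_SIZE = 25`, archiving 60 tasks would dispatch three `moveToArchive` operations of 25, 25, and 10 tasks, keeping each individual payload bounded.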
| Approach | Payload Size | Complexity | Reliability |
|---|---|---|---|
| Full payload (current) | Large (~2KB/task) | Low | High |
| Meta-reducer enrichment | Small | High | Medium |
| Two-phase archive | Same as current | Higher | High |
| Operation-derived archive | Small | Very High | Medium |
The payload size reduction doesn't justify the added complexity.
Related documents:

- `archive-operation-redesign.md` - Detailed analysis of archive options
- `code-audit.md` - Overall operation compliance audit
- `operation-size-analysis.md` - Initial payload size analysis

If payload size becomes a real problem (not theoretical), revisit Option D (operation-derived archive).
Alternative optimization: compress the `_tasks` payload within the `moveToArchive` operation (e.g., using LZ-string or GZIP) before sending. This would address the size concern without requiring the architectural overhaul of Option D.

Until then, the current approach is the right balance.
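As a rough sketch of the compression idea, using Node's built-in `zlib` (in the browser the app would more likely reach for LZ-string; the function names here are illustrative):

```typescript
import { gzipSync, gunzipSync } from 'node:zlib';

// Compress the heavy `_tasks` part of the payload before sending;
// the receiver inflates it back before applying the operation.
const packTasks = (tasks: unknown[]): string =>
  gzipSync(JSON.stringify(tasks)).toString('base64');

const unpackTasks = (packed: string): unknown[] =>
  JSON.parse(gunzipSync(Buffer.from(packed, 'base64')).toString('utf8'));
```

Serialized task arrays are highly repetitive (every task shares the same JSON keys), which is exactly the kind of input that compresses well, so a multi-kilobyte `_tasks` array can shrink substantially without any change to the operation model.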