docs/public/architecture-evolution.mdx
Goal: Create a memory system that makes Claude smarter across sessions without the user noticing it exists.
Challenge: How do you observe AI agent behavior, compress it intelligently, and serve it back at the right time - all without slowing down or interfering with the main workflow?
This is the story of how claude-mem evolved from a simple idea to a production-ready system, and the key architectural decisions that made it work.
After establishing the solid v4 architecture, v5.x focused on user experience, visualization, and polish.
What Changed: Added light/dark mode theme toggle to viewer UI
New Features:
- Light, dark, and system theme options
- Preference persisted across sessions via localStorage

Implementation:
// Theme context with persistence
import { createContext, useEffect, useState, type ReactNode } from 'react';

type Theme = 'light' | 'dark' | 'system';

const ThemeContext = createContext<{
  theme: Theme;
  setTheme: (theme: Theme) => void;
}>({ theme: 'system', setTheme: () => {} });

const ThemeProvider = ({ children }: { children: ReactNode }) => {
  const [theme, setTheme] = useState<Theme>(() => {
    // getItem returns string | null, so cast and fall back to 'system'
    return (localStorage.getItem('claude-mem-theme') as Theme | null) ?? 'system';
  });

  useEffect(() => {
    localStorage.setItem('claude-mem-theme', theme);
  }, [theme]);

  return (
    <ThemeContext.Provider value={{ theme, setTheme }}>
      {children}
    </ThemeContext.Provider>
  );
};
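Consumers read and update the theme through the context; a minimal sketch (this `ThemeToggle` component is hypothetical, not from the codebase):

```tsx
import { useContext } from 'react';

// Hypothetical toggle: cycles light → dark → system
const ThemeToggle = () => {
  const { theme, setTheme } = useContext(ThemeContext);
  const next = { light: 'dark', dark: 'system', system: 'light' } as const;
  return <button onClick={() => setTheme(next[theme])}>Theme: {theme}</button>;
};
```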
Why It Matters: Users working in different lighting conditions can now customize the viewer for comfort.
Note: This section describes a historical PM2-based approach that has been replaced with Bun in later versions.
The Problem: Worker startup failed on Windows with ENOENT error when using PM2
Historical Solution: Used full path to PM2 binary instead of relying on PATH
Current Approach: The project now uses Bun for process management, which provides better cross-platform compatibility and eliminates these PATH-related issues.
Impact: Cross-platform compatibility restored; Windows users can now use claude-mem without issues.
The Breakthrough: Real-time visualization of memory stream
What We Built:
New Worker Endpoints (8 additions):
GET / # Serves viewer HTML
GET /stream # SSE real-time updates
GET /api/prompts # Paginated user prompts
GET /api/observations # Paginated observations
GET /api/summaries # Paginated session summaries
GET /api/stats # Database statistics
GET /api/settings # User settings
POST /api/settings # Save settings
Database Enhancements:
// New SessionStore methods for viewer
getRecentPrompts(limit, offset, project?)
getRecentObservations(limit, offset, project?)
getRecentSummaries(limit, offset, project?)
getStats()
getUniqueProjects()
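These methods aren't shown in the source; a minimal sketch of one, assuming the `bun:sqlite` backend and the v4 `observations` schema described later (newest first, with an optional project filter):

```ts
import { Database } from 'bun:sqlite';

// Illustrative sketch of getRecentObservations: LIMIT/OFFSET pagination
function getRecentObservations(db: Database, limit: number, offset: number, project?: string) {
  const where = project ? 'WHERE project = ?' : '';
  const params = project ? [project, limit, offset] : [limit, offset];
  return db
    .query(`SELECT * FROM observations ${where}
            ORDER BY created_at_epoch DESC
            LIMIT ? OFFSET ?`)
    .all(...params);
}
```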
React Architecture:
src/ui/viewer/
├── components/
│ ├── Header.tsx # Navigation + stats
│ ├── Sidebar.tsx # Project filter
│ ├── Feed.tsx # Infinite scroll
│ └── cards/
│ ├── ObservationCard.tsx
│ ├── PromptCard.tsx
│ ├── SummaryCard.tsx
│ └── SkeletonCard.tsx
├── hooks/
│ ├── useSSE.ts # Real-time events
│ ├── usePagination.ts # Infinite scroll
│ ├── useSettings.ts # Persistence
│ └── useStats.ts # Statistics
└── utils/
├── merge.ts # Data deduplication
└── format.ts # Display formatting
Build Process:
// esbuild bundles everything into a single HTML file
import * as esbuild from 'esbuild';

await esbuild.build({
  entryPoints: ['src/ui/viewer/index.tsx'],
  bundle: true,
  outfile: 'plugin/ui/viewer.html',
  loader: { '.tsx': 'tsx', '.woff2': 'dataurl' },
  define: { 'process.env.NODE_ENV': '"production"' },
});
Why It Matters: Users can now see exactly what's being captured in real-time, making the memory system transparent and debuggable.
The Problem: npm install ran on every SessionStart (2-5 seconds)
The Insight: Dependencies rarely change between sessions
The Solution: Version-based caching
// Check version marker before installing
import { existsSync, readFileSync, writeFileSync } from 'fs';

const currentVersion = getPackageVersion();
// A missing marker file means a fresh install is needed
const installedVersion = existsSync('.install-version')
  ? readFileSync('.install-version', 'utf-8')
  : null;

if (currentVersion !== installedVersion) {
  // Only install if the version changed (or was never recorded)
  await runNpmInstall();
  writeFileSync('.install-version', currentVersion);
}
Cached Check Logic:
- Does `node_modules` exist?
- Does `.install-version` match the `package.json` version?
- Is `better-sqlite3` present? (Legacy: the project now uses `bun:sqlite`, which requires no installation)

Impact:
What Changed: More robust worker startup and monitoring
New Features:
// Health check endpoint
app.get('/health', (req, res) => {
res.json({
status: 'ok',
uptime: process.uptime(),
port: WORKER_PORT,
memory: process.memoryUsage(),
});
});
// Smart worker startup
async function ensureWorkerHealthy() {
const healthy = await isWorkerHealthy(1000);
if (!healthy) {
await startWorker();
await waitForWorkerHealth(10000);
}
}
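The `isWorkerHealthy` helper isn't shown in the source; a plausible sketch that probes the `/health` endpoint with a timeout (`WORKER_PORT` is assumed from the snippet above):

```ts
// Probe /health; any network error, timeout, or non-2xx response counts as unhealthy
async function isWorkerHealthy(timeoutMs: number): Promise<boolean> {
  try {
    const res = await fetch(`http://localhost:${WORKER_PORT}/health`, {
      signal: AbortSignal.timeout(timeoutMs),
    });
    return res.ok;
  } catch {
    return false;
  }
}
```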
Benefits:
What Changed: Various bug fixes and stability enhancements
Key Fixes:
The Evolution: SQLite FTS5 + Chroma vector search
What We Added:
┌─────────────────────────────────────────────────────────┐
│ HYBRID SEARCH │
│ │
│ Text Query → SQLite FTS5 (keyword matching) │
│ ↓ │
│ Chroma Vector Search (semantic) │
│ ↓ │
│ Merge + Re-rank Results │
└─────────────────────────────────────────────────────────┘
New Dependencies:
- `chromadb` - Vector database for semantic search

MCP Tools Enhancement:
// Chroma-backed semantic search
search_observations({
query: "authentication bug",
useSemanticSearch: true // Uses Chroma
});
// Falls back to FTS5 if Chroma unavailable
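The merge and re-rank step isn't spelled out in the source; one plausible sketch, assuming each backend returns scored hits (the summed-score scheme is an assumption):

```ts
interface Hit { id: number; score: number }

// Deduplicate by ID and sum scores, so results found by both
// FTS5 and Chroma outrank single-source results
function mergeAndRerank(keyword: Hit[], semantic: Hit[], limit = 20): Hit[] {
  const merged = new Map<number, Hit>();
  for (const hit of [...keyword, ...semantic]) {
    const prev = merged.get(hit.id);
    merged.set(hit.id, { id: hit.id, score: (prev?.score ?? 0) + hit.score });
  }
  return [...merged.values()].sort((a, b) => b.score - a.score).slice(0, limit);
}
```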
Why Hybrid:
Trade-offs:
Before:
9+ MCP tools registered at session start:
- search_observations
- find_by_type
- find_by_file
- find_by_concept
- get_recent_context
- get_observation
- get_session
- get_prompt
- help
Problems:
- Overlapping operations (search_observations vs find_by_type)
- Complex parameter schemas (~2,500 tokens in tool definitions)
- No built-in workflow guidance
- High cognitive load for Claude (which tool to use?)
- Code size: ~2,718 lines in mcp-server.ts
The Insight: Progressive disclosure should be built into tool design itself, not something Claude has to remember.
After:
4 MCP tools following 3-layer workflow:
1. __IMPORTANT - Workflow documentation (always visible)
"3-LAYER WORKFLOW (ALWAYS FOLLOW):
1. search(query) → Get index with IDs
2. timeline(anchor=ID) → Get context
3. get_observations([IDs]) → Fetch details
NEVER fetch full details without filtering first."
2. search - Layer 1: Get index with IDs (~50-100 tokens/result)
3. timeline - Layer 2: Get chronological context
4. get_observations - Layer 3: Fetch full details (~500-1,000 tokens/result)
Benefits:
- Progressive disclosure enforced by tool structure
- No overlapping operations
- Simple schemas (additionalProperties: true)
- Clear workflow pattern
- Code size: ~312 lines in mcp-server.ts (88% reduction)
- ~10x token savings
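In practice, a session using the workflow looks roughly like this, assuming thin async wrappers around the four tools (call shapes and IDs are illustrative, not the exact MCP wire format):

```ts
// Layer 1: cheap index (~50-100 tokens per result)
const index = await search({ query: 'authentication bug' });

// Layer 2: chronological context around the most promising hit
const context = await timeline({ anchor: index.results[0].id });

// Layer 3: full details (~500-1,000 tokens each), only for the IDs that matter
const details = await get_observations({
  ids: [index.results[0].id, index.results[3].id],
});
```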
Previously: Search was exposed through a skill-based approach
Now: The skill-based approach has been removed in favor of direct MCP tools
MCP Server Refactor:
Before:
// Complex parameter schemas
{
name: "search_observations",
inputSchema: {
type: "object",
properties: {
query: { type: "string", description: "..." },
type: { type: "array", items: { enum: [...] } },
format: { enum: ["index", "full"] },
limit: { type: "number", minimum: 1, maximum: 100 },
// ... many more parameters
}
}
}
After:
// Simple schemas with workflow guidance
{
name: "search",
description: "Step 1: Search memory. Returns index with IDs.",
inputSchema: {
type: "object",
properties: {},
additionalProperties: true // Accept any parameters
}
}
Workflow Enforcement:
Before: Claude had to remember progressive disclosure pattern
After: Tool structure makes it impossible to skip steps
Token Efficiency:
Traditional: Fetch 20 observations upfront
→ 10,000-20,000 tokens
→ Only 2 observations relevant (90% waste)
3-Layer Workflow:
→ search (20 results): ~1,000-2,000 tokens
→ Review index, identify 3 relevant IDs
→ get_observations (3 IDs): ~1,500-3,000 tokens
→ Total: 2,500-5,000 tokens (50-75% savings)
Code Simplicity:
User Experience:
Progressive Disclosure Through Structure:
The 3-layer workflow embodies progressive disclosure at the architectural level:
Each layer provides a decision point where Claude can:
This makes it structurally difficult to waste tokens.
Architecture:
PostToolUse Hook → Save raw tool outputs → Retrieve everything on startup
What we learned:
Example of what went wrong:
SessionStart loaded:
- 150 file read operations
- 80 grep searches
- 45 bash commands
- Total: ~35,000 tokens
- Relevant to current task: ~500 tokens (1.4%)
New idea: Use Claude itself to compress observations
Architecture:
PostToolUse Hook → Queue observation → SDK Worker → AI compression → Store insights
What we added:
What worked:
What didn't work:
Problem: Even compressed observations can pollute context if you load them all.
Insight: Humans don't read everything before starting work. Why should AI?
Solution: Show an index first, fetch details on-demand.
❌ Old: Load 50 observations (8,500 tokens)
✅ New: Show index of 50 observations (800 tokens)
Agent fetches 2-3 relevant ones (300 tokens)
Total: 1,100 tokens vs 8,500 tokens
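A sketch of what one index entry might carry - just enough to decide whether to fetch details (field names mirror the v4 schema shown later; the exact shape is an assumption):

```ts
// One line of the progressive disclosure index
interface IndexEntry {
  id: number;       // pass to get_observations to fetch details
  title: string;    // e.g. "Fixed FTS5 injection in search"
  type: string;     // decision, bugfix, feature, ...
  tokens: number;   // approximate cost of fetching the full record
}
```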
Impact:
Problem: SDK session IDs change on every turn.
What we thought:
// ❌ Wrong assumption
UserPromptSubmit → Capture session ID once → Use forever
Reality:
// ✅ Actual behavior
Turn 1: session_abc123
Turn 2: session_def456
Turn 3: session_ghi789
Why this matters:
Solution:
// Capture from system init message
for await (const msg of response) {
if (msg.type === 'system' && msg.subtype === 'init') {
sdkSessionId = msg.session_id;
await updateSessionId(sessionId, sdkSessionId);
}
}
v3 approach:
// ❌ Aggressive: Kill worker immediately
SessionEnd → DELETE /worker/session → Worker stops
Problems:
v4 approach:
// ✅ Graceful: Let worker finish
SessionEnd → Mark session complete → Worker finishes → Exit naturally
Benefits:
Code:
// v3: Aggressive
async function sessionEnd(sessionId: string) {
await fetch(`http://localhost:37777/sessions/${sessionId}`, {
method: 'DELETE'
});
}
// v4: Graceful
async function sessionEnd(sessionId: string) {
await db.run(
'UPDATE sdk_sessions SET completed_at = ? WHERE id = ?',
[Date.now(), sessionId]
);
}
Problem: We were creating multiple SDK sessions per Claude Code session.
What we thought:
Claude Code session → Create SDK session per observation → 100+ SDK sessions
Reality should be:
Claude Code session → ONE long-running SDK session → Streaming input
Why this matters:
Implementation:
// ✅ Streaming Input Mode
async function* messageGenerator(): AsyncIterable<UserMessage> {
// Initial prompt
yield {
role: "user",
content: "You are a memory assistant..."
};
// Then continuously yield observations
while (session.status === 'active') {
const observations = await pollQueue();
for (const obs of observations) {
yield {
role: "user",
content: formatObservation(obs)
};
}
await sleep(1000);
}
}
const response = query({
prompt: messageGenerator(),
options: { maxTurns: 1000 }
});
┌─────────────────────────────────────────────────────────┐
│ CLAUDE CODE SESSION │
│ User → Claude → Tools (Read, Edit, Write, Bash) │
│ ↓ │
│ PostToolUse Hook │
│ (queues observation) │
└─────────────────────────────────────────────────────────┘
↓ SQLite queue
┌─────────────────────────────────────────────────────────┐
│ SDK WORKER PROCESS │
│ ONE streaming session per Claude Code session │
│ │
│ AsyncIterable<UserMessage> │
│ → Yields observations from queue │
│ → SDK compresses via AI │
│ → Parses XML responses │
│ → Stores in database │
└─────────────────────────────────────────────────────────┘
↓ SQLite storage
┌─────────────────────────────────────────────────────────┐
│ NEXT SESSION │
│ SessionStart Hook │
│ → Queries database │
│ → Returns progressive disclosure index │
│ → Agent fetches details via MCP │
└─────────────────────────────────────────────────────────┘
**Timing:** When Claude Code starts
**What it does:**
- Queries last 10 session summaries
- Formats as progressive disclosure index
- Injects into context via stdout
**Key change from v3:**
- ✅ Index format (not full details)
- ✅ Token counts visible
- ✅ MCP search instructions included
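Mechanically, the hook injects context by printing JSON to stdout, which Claude Code parses; a minimal sketch (the `formatAsIndex` helper is hypothetical):

```ts
// Sketch: emit the progressive-disclosure index as hook JSON on stdout.
// `store` is the SessionStore from v5.1.0; formatAsIndex is hypothetical.
function emitSessionStartContext(store: SessionStore) {
  const summaries = store.getRecentSummaries(10, 0); // last 10 sessions
  const index = formatAsIndex(summaries);            // hypothetical formatter

  // Claude Code reads this JSON from stdout and injects additionalContext
  console.log(JSON.stringify({
    hookSpecificOutput: { additionalContext: index },
  }));
}
```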
**Timing:** Before Claude processes prompt
**What it does:**
- Creates session record
- Saves raw user prompt (v4.2.0+)
- Starts worker if needed
**Key change from v3:**
- ✅ Stores raw prompts for search
- ✅ Auto-starts worker service
**Timing:** After every tool execution
**What it does:**
- Enqueues observation in database
- Returns immediately
**Key change from v3:**
- ✅ Just enqueues (doesn't process)
- ✅ Worker handles all AI calls
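A sketch of the enqueue step, assuming a simple SQLite-backed queue (the `pending_observations` table and its columns are assumptions):

```ts
import { Database } from 'bun:sqlite';

// Record the raw tool event and return immediately; the worker drains the queue
function enqueueObservation(db: Database, sessionId: string, toolName: string, payload: unknown) {
  db.query(
    `INSERT INTO pending_observations (session_id, tool_name, payload, created_at_epoch)
     VALUES (?, ?, ?, ?)`
  ).run(sessionId, toolName, JSON.stringify(payload), Date.now());
}
```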
**Timing:** Worker-triggered (mid-session)
**What it does:**
- Gathers observations
- Sends to Claude for summarization
- Stores structured summary
**Key change from v3:**
- ✅ Multiple summaries per session
- ✅ Summaries are checkpoints, not endings
**Timing:** When session ends
**What it does:**
- Marks session complete
- Lets worker finish processing
**Key change from v3:**
- ✅ Graceful (not aggressive)
- ✅ No DELETE requests
- ✅ Worker finishes naturally
v3 schema:
-- Simple, flat structure
CREATE TABLE observations (
id INTEGER PRIMARY KEY,
session_id TEXT,
text TEXT,
created_at INTEGER
);
v4 schema:
-- Rich, structured schema
CREATE TABLE observations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
session_id TEXT NOT NULL,
project TEXT NOT NULL,
-- Progressive disclosure metadata
title TEXT NOT NULL,
subtitle TEXT,
type TEXT NOT NULL, -- decision, bugfix, feature, etc.
-- Content
narrative TEXT NOT NULL,
facts TEXT, -- JSON array
-- Searchability
concepts TEXT, -- JSON array of tags
files_read TEXT, -- JSON array
files_modified TEXT, -- JSON array
-- Timestamps
created_at TEXT NOT NULL,
created_at_epoch INTEGER NOT NULL,
FOREIGN KEY(session_id) REFERENCES sdk_sessions(id)
);
-- FTS5 for full-text search
CREATE VIRTUAL TABLE observations_fts USING fts5(
title, subtitle, narrative, facts, concepts,
content=observations
);
-- Auto-sync triggers
CREATE TRIGGER observations_ai AFTER INSERT ON observations BEGIN
INSERT INTO observations_fts(rowid, title, subtitle, narrative, facts, concepts)
VALUES (new.id, new.title, new.subtitle, new.narrative, new.facts, new.concepts);
END;
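With the external-content FTS table and trigger in place, a ranked search joins back to the base table for display metadata; an illustrative query (via `bun:sqlite`, as assumed elsewhere):

```ts
import { Database } from 'bun:sqlite';

// Match against the FTS index, order by FTS5's built-in relevance rank,
// and join back to observations for the progressive-disclosure fields
function searchIndex(db: Database, match: string, limit = 20) {
  return db.query(
    `SELECT o.id, o.title, o.subtitle, o.type
     FROM observations_fts
     JOIN observations o ON o.id = observations_fts.rowid
     WHERE observations_fts MATCH ?
     ORDER BY rank
     LIMIT ?`
  ).all(match, limit);
}
```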
What changed:
v3 worker:
// Multiple short SDK sessions
app.post('/process', async (req, res) => {
const response = await query({
prompt: buildPrompt(req.body),
options: { maxTurns: 1 }
});
for await (const msg of response) {
// Process single observation
}
res.json({ success: true });
});
v4 worker:
// ONE long-running SDK session
async function runWorker(sessionId: string) {
const response = query({
prompt: messageGenerator(), // AsyncIterable
options: { maxTurns: 1000 }
});
for await (const msg of response) {
if (msg.type === 'text') {
parseObservations(msg.content);
parseSummaries(msg.content);
}
}
}
Benefits:
Problem: SessionStart hook output polluted with npm install logs
# Hook output contained:
npm WARN deprecated ...
npm WARN deprecated ...
{"hookSpecificOutput": {"additionalContext": "..."}}
Why it broke:
Solution:
{
"command": "npm install --loglevel=silent && node context-hook.js"
}
Result: Clean JSON output, context injection works
Problem: Hook executables had duplicate shebangs
#!/usr/bin/env node
#!/usr/bin/env node // ← Duplicate!
// Rest of code...
Why it happened:
Solution:
// Remove shebangs from source files
// Let esbuild add them during build
Result: Clean executables, no parsing errors
Problem: User input passed directly to FTS5 query
// ❌ Vulnerable
const results = db.query(
`SELECT * FROM observations_fts WHERE observations_fts MATCH '${userQuery}'`
);
Attack:
userQuery = "'; DROP TABLE observations; --"
Solution:
// ✅ Safe: Use parameterized queries
const results = db.query(
'SELECT * FROM observations_fts WHERE observations_fts MATCH ?',
[userQuery]
);
Problem: Session creation failed when prompt was empty
INSERT INTO sdk_sessions (claude_session_id, user_prompt, ...)
VALUES ('abc123', NULL, ...) -- ❌ user_prompt is NOT NULL
Solution:
// Allow NULL user_prompts
user_prompt: input.prompt ?? null
Schema change:
-- Before
user_prompt TEXT NOT NULL
-- After
user_prompt TEXT -- Nullable
Before:
for (const obs of observations) {
db.run(`INSERT INTO observations (...) VALUES (?, ?, ...)`, [obs.id, obs.text, ...]);
}
After:
const stmt = db.prepare(`INSERT INTO observations (...) VALUES (?, ?, ...)`);
for (const obs of observations) {
stmt.run([obs.id, obs.text, ...]);
}
stmt.finalize();
Impact: 5x faster bulk inserts
Before:
// Manual full-text search
const results = db.query(
`SELECT * FROM observations WHERE text LIKE '%${query}%'`
);
After:
// FTS5 virtual table
const results = db.query(
`SELECT * FROM observations_fts WHERE observations_fts MATCH ?`,
[query]
);
Impact: 100x faster searches on large datasets
Before:
// Always return full observations
search_observations({ query: "hooks" });
// Returns: 5,000 tokens
After:
// Default to index format
search_observations({ query: "hooks", format: "index" });
// Returns: 200 tokens
// Fetch full only when needed
search_observations({ query: "hooks", format: "full", limit: 1 });
// Returns: 150 tokens
Impact: 25x reduction in average search result size
Principle: Every token you put in context window costs attention.
Application:
Principle: Distributed state is hard. SDK handles it better than we can.
Application:
Principle: Let processes finish their work before terminating.
Application:
Principle: Don't compress manually. Let AI do semantic compression.
Application:
Principle: Show metadata first, fetch details on-demand.
Application:
SessionStart({ source: "startup" }):
→ Show last 10 sessions (normal)
SessionStart({ source: "resume" }):
→ Show only current session (minimal)
SessionStart({ source: "compact" }):
→ Show last 20 sessions (comprehensive)
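A sketch of how a hook might branch on the `source` field (the mapping mirrors the behavior above; the function and type names are illustrative):

```ts
type StartSource = 'startup' | 'resume' | 'compact';

// How many session summaries to surface for each start mode
function summaryLimitFor(source: StartSource): number {
  switch (source) {
    case 'startup': return 10; // normal startup: last 10 sessions
    case 'resume':  return 1;  // resume: current session only (minimal)
    case 'compact': return 20; // post-compaction: comprehensive recap
  }
}
```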
// Use embeddings to pre-sort index by semantic relevance
search_observations({
query: "authentication bug",
sort: "relevance" // Based on embeddings
});
// Cross-project pattern recognition
search_observations({
query: "API rate limiting",
projects: ["api-gateway", "user-service", "billing-service"]
});
// Team-shared observations (optional)
createObservation({
title: "Rate limit: 100 req/min",
scope: "team" // vs "user"
});
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem-v3-backup.db
cd ~/.claude/plugins/marketplaces/thedotmack
git pull
/plugin update claude-mem
What happens automatically:
# Start Claude Code
claude
# Check that context is injected
# (Should see progressive disclosure index with v5 viewer link)
# Open viewer UI (v5.1.0+)
open http://localhost:37777
# Submit a prompt and watch real-time updates in viewer
# Toggle theme (v5.1.2+)
# Click theme button in viewer header
# Check worker health
npm run worker:status
curl http://localhost:37777/health
v3:
| Metric | Value |
|---|---|
| Context usage per session | ~25,000 tokens |
| Relevant context | ~2,000 tokens (8%) |
| Hook execution time | ~200ms |
| Search latency | ~500ms (LIKE queries) |
v4:
| Metric | Value |
|---|---|
| Context usage per session | ~1,100 tokens |
| Relevant context | ~1,100 tokens (100%) |
| Hook execution time | ~45ms |
| Search latency | ~15ms (FTS5) |
v5:
| Metric | Value |
|---|---|
| Context usage per session | ~1,100 tokens |
| Relevant context | ~1,100 tokens (100%) |
| Hook execution time | ~10ms (cached install) |
| Search latency | ~12ms (FTS5) or ~25ms (hybrid) |
| Viewer UI load time | ~50ms (bundled HTML) |
| SSE update latency | ~5ms (real-time) |
v3 → v4 Improvements:
v4 → v5 Improvements:
The journey from v3 to v5 was about understanding these fundamental truths:
The result is a memory system that's both powerful and invisible. Users never notice it working - Claude just gets smarter over time.
v5 adds visibility: Now users CAN see the memory system working if they want (via viewer UI), but it's still non-intrusive.
This architecture evolution reflects hundreds of hours of experimentation, dozens of dead ends, and the invaluable experience of real-world usage. v5 is the architecture that emerged from understanding what actually works - and making it visible to users.