# docs/memory/README.md
This directory contains memory and learning data for the SuperClaude Framework's PM Agent. The PM Agent uses several memory systems to learn, improve, and maintain context across sessions.
## reflexion.jsonl

Purpose: Error learning database
Format: JSON Lines
Generated by: ReflexionMemory system (superclaude/core/pm_init/reflexion_memory.py)
Stores past errors, root causes, and solutions for instant error resolution.
Example entry:

```json
{
  "ts": "2025-10-30T14:23:45+09:00",
  "task": "implement JWT authentication",
  "mistake": "JWT validation failed",
  "evidence": "TypeError: secret undefined",
  "rule": "Check env vars before auth implementation",
  "fix": "Added JWT_SECRET to .env",
  "tests": ["Verify .env vars", "Test JWT signing"],
  "status": "adopted"
}
```
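Past learnings can be retrieved with a quick jq scan over the file. A minimal sketch, assuming the field names from the example above (the inline entries and the "JWT" search term are illustrative, not the real database):

```shell
# Minimal sketch: look up past rules whose "mistake" mentions a keyword.
# The inline entries are illustrative; in practice point jq at docs/memory/reflexion.jsonl.
db=$(mktemp)
cat > "$db" <<'EOF'
{"ts":"2025-10-30T14:23:45+09:00","mistake":"JWT validation failed","rule":"Check env vars before auth implementation"}
{"ts":"2025-10-30T15:01:02+09:00","mistake":"Off-by-one in pagination","rule":"Test boundary pages"}
EOF
rule=$(jq -r 'select(.mistake | test("JWT")) | .rule' "$db")
echo "$rule"   # → Check env vars before auth implementation
```

Because each line is an independent JSON object, jq streams through the file without loading it all at once.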
User Guide: See docs/user-guide/memory-system.md
## reflexion.jsonl.example

Purpose: Sample reflexion entries for reference
Status: Template file (15 realistic examples)
Copy this to reflexion.jsonl if you want to start with example data, or let the system create it automatically on first error.
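Seeding from the template is a single copy. A minimal sketch (the placeholder creation exists only so the snippet is self-contained; in the repository the example file already ships with the framework):

```shell
# Seed the live database from the bundled template (paths as documented in this README).
# The printf line creates a stand-in example file so the snippet runs anywhere;
# in the repository, docs/memory/reflexion.jsonl.example already exists.
mkdir -p docs/memory
[ -f docs/memory/reflexion.jsonl.example ] || \
  printf '{"mistake":"sample"}\n' > docs/memory/reflexion.jsonl.example
cp docs/memory/reflexion.jsonl.example docs/memory/reflexion.jsonl
```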
## workflow_metrics.jsonl

Purpose: Task performance tracking
Format: JSON Lines
Generated by: PM Agent workflow system
Tracks token usage, execution time, and success rates for continuous optimization.
Example entry:

```json
{
  "timestamp": "2025-10-17T01:54:21+09:00",
  "session_id": "abc123",
  "task_type": "bug_fix",
  "complexity": "light",
  "workflow_id": "progressive_v3_layer2",
  "layers_used": [0, 1, 2],
  "tokens_used": 650,
  "time_ms": 1800,
  "success": true
}
```
Schema: See WORKFLOW_METRICS_SCHEMA.md
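These metrics aggregate well with jq. A minimal sketch computing run counts, success rate, and mean token cost per task type (the inline entries are illustrative; in practice read docs/memory/workflow_metrics.jsonl):

```shell
# Aggregate metrics per task_type: run count, success rate, mean token cost.
m=$(mktemp)
cat > "$m" <<'EOF'
{"task_type":"bug_fix","tokens_used":650,"success":true}
{"task_type":"bug_fix","tokens_used":900,"success":false}
{"task_type":"feature","tokens_used":1200,"success":true}
EOF
jq -s 'group_by(.task_type) | map({
  task_type: .[0].task_type,
  runs: length,
  success_rate: ((map(select(.success)) | length) / length),
  avg_tokens: ((map(.tokens_used) | add) / length)
})' "$m"
```

`jq -s` slurps the whole file into an array so `group_by` can bucket entries by task type; for very large files a streaming approach would be preferable.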
## patterns_learned.jsonl

Purpose: Successful implementation patterns
Format: JSON Lines
Generated by: PM Agent learning system
Captures reusable patterns from successful implementations.
Supporting documentation in this directory:

- WORKFLOW_METRICS_SCHEMA.md - complete schema definition for workflow metrics data, including field types, descriptions, and examples
- Documentation of the PM Agent's context management system, including the progressive loading strategy and token efficiency
- Validation results and benchmarks for token-efficiency optimizations
- Session notes and context from previous work sessions
- Planned improvements and next steps for the memory system
These files are automatically created and managed by the system:

- reflexion.jsonl - created on first error
- workflow_metrics.jsonl - created on first task
- patterns_learned.jsonl - created when patterns are learned

Don't manually create these files - the system handles it.
If reflexion.jsonl doesn't exist, that's expected on a fresh install: it is created automatically when the first error is recorded.
Backup:

```shell
# Archive old learnings
tar -czf memory-backup-$(date +%Y%m%d).tar.gz docs/memory/*.jsonl
```
Clean old entries (if files grow too large):

```shell
# Keep the last 100 entries
tail -n 100 docs/memory/reflexion.jsonl > reflexion.tmp
mv reflexion.tmp docs/memory/reflexion.jsonl
```
Validate JSON format:

```shell
# Check that every line is valid JSON
while IFS= read -r line; do
  echo "$line" | jq . >/dev/null 2>&1 || echo "Invalid: $line"
done < docs/memory/reflexion.jsonl
```
✅ Should be committed:

- reflexion.jsonl.example (template)
- patterns_learned.jsonl (shared patterns)

❓ Optional to commit:

- reflexion.jsonl (team-specific learnings)
- workflow_metrics.jsonl (performance data)

Recommendation: Add reflexion.jsonl to .gitignore if learnings are developer-specific.
If you want personal memory (not shared with the team):

```shell
# Add to .gitignore
echo "docs/memory/reflexion.jsonl" >> .gitignore
echo "docs/memory/workflow_metrics.jsonl" >> .gitignore
```
If you want shared team memory, keep the files in git (the current default) so all team members learn from each other's mistakes.
ReflexionMemory stores only the error metadata shown in the example entry above: timestamps, task descriptions, error evidence, rules, fixes, and verification steps. It does not deliberately record secrets, but an error message captured in the "evidence" field may contain them.

If an error message contains sensitive info, edit the offending entry in reflexion.jsonl by hand and redact it. Example:

```jsonc
// Before (contains secret)
{"evidence": "Auth failed with key abc123xyz"}

// After (redacted)
{"evidence": "Auth failed with invalid API key"}
```
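Redaction can also be scripted. A minimal sketch using jq to rewrite evidence strings that match a secret-looking pattern (both the pattern and the sample entry are illustrative; run against docs/memory/reflexion.jsonl in practice):

```shell
# Blank out secret-looking values in the "evidence" field.
db=$(mktemp)
printf '%s\n' '{"evidence":"Auth failed with key abc123xyz"}' > "$db"
redacted=$(jq -c 'if (.evidence // "" | test("key [A-Za-z0-9]+"))
                  then .evidence = "Auth failed with invalid API key"
                  else . end' "$db")
echo "$redacted"
```

The `// ""` guard keeps the filter from failing on entries that lack an evidence field.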
Expected file sizes:

- reflexion.jsonl: 1-10 KB per 10 entries (~1 MB per 1,000 errors)
- workflow_metrics.jsonl: 0.5-1 KB per entry
- patterns_learned.jsonl: 2-5 KB per pattern

ReflexionMemory search is fast; there are no performance concerns until 10,000+ entries.
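The scale claim is easy to sanity-check locally: a line-oriented scan over a few thousand entries completes near-instantly on typical hardware. A rough sketch with synthetic data (counts are illustrative):

```shell
# Rough scale check: generate 1,000 synthetic entries, then scan them all.
db=$(mktemp)
for i in $(seq 1 1000); do
  printf '{"mistake":"error %s"}\n' "$i"
done > "$db"
matches=$(grep -c '"mistake"' "$db")
echo "$matches"   # → 1000
```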
If you get EACCES errors:

```shell
chmod 644 docs/memory/*.jsonl
```
If entries are malformed:

```shell
# Keep only the lines that parse as valid JSON
while IFS= read -r line; do
  echo "$line" | jq . >/dev/null 2>&1 && echo "$line"
done < reflexion.jsonl > fixed.jsonl
mv fixed.jsonl reflexion.jsonl
```
If you see duplicate learnings:

```shell
# Show duplicates
jq -r '.mistake' reflexion.jsonl | sort | uniq -c | sort -rn

# Remove duplicates (keeps one entry per distinct mistake)
jq -cs 'unique_by(.mistake)[]' reflexion.jsonl > deduplicated.jsonl
mv deduplicated.jsonl reflexion.jsonl
```
Implementation: superclaude/core/pm_init/reflexion_memory.py

```shell
# View all learnings
jq . docs/memory/reflexion.jsonl

# Count entries
wc -l docs/memory/reflexion.jsonl

# Search for a specific topic
grep -i "auth" docs/memory/reflexion.jsonl | jq .

# Latest 5 learnings
tail -n 5 docs/memory/reflexion.jsonl | jq .

# Most common mistakes
jq -r '.mistake' docs/memory/reflexion.jsonl | sort | uniq -c | sort -rn | head -10

# Export to a readable format
jq . docs/memory/reflexion.jsonl > reflexion-readable.json
```
Last Updated: 2025-10-30
Maintained by: SuperClaude Framework Team