docs/Development/pm-agent-integration.md
Last Updated: 2025-10-14 Target Version: 4.3.0 Status: Implementation Guide
This guide provides step-by-step procedures for integrating PM Agent mode as SuperClaude's always-active meta-layer with session lifecycle management, PDCA self-evaluation, and systematic knowledge management.
```
┌────────────────────────────────────────────┐
│  PM Agent Mode (Meta-Layer)                │
│  • Always Active                           │
│  • Session Management                      │
│  • PDCA Self-Evaluation                    │
└───────────────┬────────────────────────────┘
                │
     [Specialist Agents Layer]
                │
     [Commands & Modes Layer]
                │
         [MCP Tool Layer]
```

See: ARCHITECTURE.md for full system architecture
```
superclaude/
├── Commands/
│   └── pm.md                 # ✅ Already updated
├── Agents/
│   └── pm-agent.md           # ✅ Already updated
└── Core/
    ├── __init__.py           # Module initialization
    ├── session_lifecycle.py  # 🆕 Session management
    ├── pdca_engine.py        # 🆕 PDCA automation
    └── memory_ops.py         # 🆕 Memory operations
```

Implementation order:

1. `memory_ops.py` - Serena MCP wrapper (foundation)
2. `session_lifecycle.py` - Session management (depends on memory_ops)
3. `pdca_engine.py` - PDCA automation (depends on memory_ops)

Wrapper for Serena MCP memory operations with error handling and fallback.
```python
# superclaude/Core/memory_ops.py
from typing import Dict, List, Optional


class MemoryOperations:
    """Serena MCP memory operations wrapper"""

    def list_memories(self) -> List[str]:
        """List all available memories"""

    def read_memory(self, key: str) -> Optional[Dict]:
        """Read memory by key"""

    def write_memory(self, key: str, value: Dict) -> bool:
        """Write memory with key"""

    def delete_memory(self, key: str) -> bool:
        """Delete memory by key"""
```
```shell
pytest tests/test_memory_ops.py -v
```
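To make the wrapper's contract concrete, here is a minimal sketch of `MemoryOperations` behavior. The real implementation would call the Serena MCP server; this sketch substitutes an in-memory dict as the fallback store (the dict-backed fallback is an assumption for illustration, not the framework's actual transport).

```python
# Sketch of memory_ops.py behavior with an in-memory fallback store
# standing in for the Serena MCP transport (an assumption for this demo).
from typing import Dict, List, Optional


class MemoryOperations:
    """Serena MCP memory operations wrapper with a local fallback store."""

    def __init__(self) -> None:
        # Fallback store used when the MCP server is unreachable.
        self._fallback: Dict[str, Dict] = {}

    def list_memories(self) -> List[str]:
        """List all available memory keys, sorted for stable output."""
        return sorted(self._fallback)

    def read_memory(self, key: str) -> Optional[Dict]:
        """Read memory by key; None if absent."""
        return self._fallback.get(key)

    def write_memory(self, key: str, value: Dict) -> bool:
        """Write memory under key; returns True on success."""
        self._fallback[key] = value
        return True

    def delete_memory(self, key: str) -> bool:
        """Delete memory by key; returns False if the key was absent."""
        return self._fallback.pop(key, None) is not None


ops = MemoryOperations()
ops.write_memory("pm_context", {"project": "SuperClaude_Framework"})
print(ops.list_memories())            # ['pm_context']
print(ops.read_memory("pm_context"))  # {'project': 'SuperClaude_Framework'}
```

The `Optional[Dict]` return on reads lets callers distinguish "memory missing" from "memory empty", which matters for the session-start restoration flow below.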
Auto-activation at session start, context restoration, user report generation.
```python
# superclaude/Core/session_lifecycle.py
class SessionLifecycle:
    """Session lifecycle management"""

    def on_session_start(self):
        """Hook for session start (auto-activation)"""
        # 1. list_memories()
        # 2. read_memory("pm_context")
        # 3. read_memory("last_session")
        # 4. read_memory("next_actions")
        # 5. generate_user_report()

    def generate_user_report(self) -> str:
        """Generate user report (last session / progress / next / issues)"""

    def on_session_end(self):
        """Hook for session end (checkpoint save)"""
        # 1. write_memory("last_session", summary)
        # 2. write_memory("next_actions", todos)
        # 3. write_memory("pm_context", complete_state)
```
```
Last session: [last session summary]
Progress:     [current progress status]
Next:         [planned next actions]
Issues:       [blockers or issues]
```
```shell
pytest tests/test_session_lifecycle.py -v
```
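The report-assembly step can be sketched as a pure function over the restored memories. The parameter shapes follow the memory structure example later in this guide; the function signature itself is hypothetical, since the real method reads memories via `MemoryOperations` rather than taking arguments.

```python
# Sketch of generate_user_report(): renders the four-section status
# report from restored memories. Argument shapes mirror the pm_context /
# last_session / next_actions example in this guide (an assumption).
from typing import Dict, List


def generate_user_report(last_session: Dict, next_actions: List[str],
                         progress: str, issues: List[str]) -> str:
    """Render the Last session / Progress / Next / Issues report."""
    return "\n".join([
        f"Last session: {'; '.join(last_session.get('accomplished', []))}",
        f"Progress: {progress}",
        f"Next: {'; '.join(next_actions)}",
        f"Issues: {'; '.join(issues) if issues else 'none'}",
    ])


report = generate_user_report(
    {"accomplished": ["Phase 1 complete"]},
    ["Implement session_lifecycle.py"],
    "Phase 2 in progress",
    ["Serena MCP not configured"],
)
print(report)
```

Keeping the rendering pure (no memory reads inside) makes it trivial to unit-test in `tests/test_session_lifecycle.py`.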
Automate PDCA cycle execution with documentation generation.
```python
# superclaude/Core/pdca_engine.py
class PDCAEngine:
    """PDCA cycle automation"""

    def plan_phase(self, goal: str):
        """Generate hypothesis"""
        # 1. write_memory("plan", goal)
        # 2. Create docs/temp/hypothesis-YYYY-MM-DD.md

    def do_phase(self):
        """Track experimentation"""
        # 1. TodoWrite tracking
        # 2. write_memory("checkpoint", progress) every 30min
        # 3. Update docs/temp/experiment-YYYY-MM-DD.md

    def check_phase(self):
        """Self-evaluation"""
        # 1. think_about_task_adherence()
        # 2. think_about_whether_you_are_done()
        # 3. Create docs/temp/lessons-YYYY-MM-DD.md

    def act_phase(self):
        """Knowledge extraction (improvement)"""
        # 1. Success → docs/patterns/[pattern-name].md
        # 2. Failure → docs/mistakes/mistake-YYYY-MM-DD.md
        # 3. Update CLAUDE.md if global pattern
```
`hypothesis-template.md`:

```markdown
# Hypothesis: [Goal Description]
Date: YYYY-MM-DD
Status: Planning

## Goal
What are we trying to accomplish?

## Approach
How will we implement this?

## Success Criteria
How do we know when we're done?

## Potential Risks
What could go wrong?
```
`experiment-template.md`:

```markdown
# Experiment Log: [Implementation Name]
Date: YYYY-MM-DD
Status: In Progress

## Implementation Steps
- [ ] Step 1
- [ ] Step 2

## Errors Encountered
- Error 1: Description, solution

## Solutions Applied
- Solution 1: Description, result

## Checkpoint Saves
- 10:00: [progress snapshot]
- 10:30: [progress snapshot]
```
```shell
pytest tests/test_pdca_engine.py -v
```
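The Plan-phase document generation can be sketched as follows: derive the dated file path and pre-fill the body from the hypothesis template above. The `hypothesis_doc` helper name is hypothetical, and file I/O is left out so the logic stays testable.

```python
# Sketch of plan_phase() doc generation: builds the dated hypothesis file
# path and skeleton body from hypothesis-template.md shown in this guide.
# The helper name and signature are assumptions for illustration.
from datetime import date
from typing import Tuple


def hypothesis_doc(goal: str, today: date) -> Tuple[str, str]:
    """Return (relative path, markdown body) for a Plan-phase hypothesis doc."""
    path = f"docs/temp/hypothesis-{today.isoformat()}.md"
    body = "\n".join([
        f"# Hypothesis: {goal}",
        f"Date: {today.isoformat()}",
        "Status: Planning",
        "",
        "## Goal",
        "",
        "## Approach",
        "",
        "## Success Criteria",
        "",
        "## Potential Risks",
    ])
    return path, body


path, body = hypothesis_doc("Session restore", date(2025, 10, 14))
print(path)  # docs/temp/hypothesis-2025-10-14.md
```

The same path-derivation pattern applies to the Do-phase `experiment-` and Check-phase `lessons-` documents.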
```shell
# Install Serena MCP server
# See: docs/troubleshooting/serena-installation.md
```

`~/.claude/.claude.json`:

```json
{
  "mcpServers": {
    "serena": {
      "command": "uv",
      "args": ["run", "serena-mcp"]
    }
  }
}
```
```json
{
  "pm_context": {
    "project": "SuperClaude_Framework",
    "current_phase": "Phase 2",
    "architecture": "Context-Oriented Configuration",
    "patterns": ["PDCA Cycle", "Session Lifecycle"]
  },
  "last_session": {
    "date": "2025-10-14",
    "accomplished": ["Phase 1 complete"],
    "issues": ["Serena MCP not configured"],
    "learned": ["Session Lifecycle pattern"]
  },
  "next_actions": [
    "Implement session_lifecycle.py",
    "Configure Serena MCP",
    "Test memory operations"
  ]
}
```
```shell
# Test memory operations
python -m SuperClaude.Core.memory_ops --test
```
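A restore-time sanity check against the memory structure above can be sketched as a small validator. The `validate_context` helper is hypothetical; the required key names come from the JSON example.

```python
# Sketch of a restore-time sanity check: verifies the restored memory
# payload contains the top-level keys shown in the example above.
# The validate_context helper is an illustrative assumption.
from typing import Dict, List

REQUIRED_KEYS = ("pm_context", "last_session", "next_actions")


def validate_context(memories: Dict) -> List[str]:
    """Return the list of missing top-level memory keys (empty if OK)."""
    return [key for key in REQUIRED_KEYS if key not in memories]


missing = validate_context({"pm_context": {}, "last_session": {}})
print(missing)  # ['next_actions']
```

Running this check inside `on_session_start()` lets the session report "Context not restoring" cases explicitly instead of failing silently.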
```
docs/
├── temp/                       # Temporary (7-day lifecycle)
│   ├── hypothesis-YYYY-MM-DD.md
│   ├── experiment-YYYY-MM-DD.md
│   └── lessons-YYYY-MM-DD.md
├── patterns/                   # Formal patterns (permanent)
│   └── [pattern-name].md
└── mistakes/                   # Mistake records (permanent)
    └── mistake-YYYY-MM-DD.md
```
```shell
# Create cleanup script
scripts/cleanup_temp_docs.sh

# Run daily via cron
0 0 * * * /path/to/scripts/cleanup_temp_docs.sh

# Migrate successful experiments to patterns
python scripts/migrate_to_patterns.py

# Migrate failures to mistakes
python scripts/migrate_to_mistakes.py
```
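The 7-day retention rule that `cleanup_temp_docs.sh` enforces can be expressed as a single predicate over the dated filenames in `docs/temp/`. This Python sketch is illustrative (the actual script is shell); the `is_expired` helper name is an assumption.

```python
# Sketch of the 7-day cleanup policy: a file named *-YYYY-MM-DD.md
# expires once its embedded date is more than 7 days old. Illustrative
# only; the real cleanup_temp_docs.sh is a shell script.
import re
from datetime import date, timedelta

DATE_RE = re.compile(r"(\d{4}-\d{2}-\d{2})\.md$")


def is_expired(filename: str, today: date, max_age_days: int = 7) -> bool:
    """True if a dated temp doc is older than the retention window."""
    match = DATE_RE.search(filename)
    if not match:
        return False  # undated files are never auto-deleted
    file_date = date.fromisoformat(match.group(1))
    return (today - file_date) > timedelta(days=max_age_days)


print(is_expired("hypothesis-2025-10-01.md", date(2025, 10, 14)))  # True
print(is_expired("lessons-2025-10-10.md", date(2025, 10, 14)))     # False
```

Files exactly at the 7-day boundary are kept; only strictly older docs are deleted, so a doc written today survives the next seven nightly cron runs.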
Once research complete, implement auto-activation hooks:
```python
# superclaude/Core/auto_activation.py (future)
def on_claude_code_start():
    """Auto-activate PM Agent at session start"""
    session_lifecycle.on_session_start()
```
Deliverables:

- [ ] `memory_ops.py`
- [ ] `session_lifecycle.py`
- [ ] `pdca_engine.py`
- [ ] `.claude.json`
- [ ] `docs/temp/` template
- [ ] `docs/patterns/` template
- [ ] `docs/mistakes/` template

```
tests/
├── test_memory_ops.py          # Memory operations
├── test_session_lifecycle.py   # Session management
└── test_pdca_engine.py         # PDCA automation

tests/integration/
├── test_pm_agent_flow.py       # End-to-end PM Agent
├── test_serena_integration.py  # Serena MCP integration
└── test_cross_session.py       # Session persistence
```
"Serena MCP not connecting"
- Check `.claude.json` configuration
- Run `claude mcp list`

"Memory operations failing"

"Context not restoring"
- Verify `pm_context` exists

Last Verified: 2025-10-14 Next Review: 2025-10-21 (1 week) Version: 4.1.5