.taskmaster/templates/example_prd_rpg.md
This template teaches you (AI or human) how to create structured, dependency-aware PRDs using the RPG methodology from Microsoft Research. The key insight: separate WHAT (functional) from HOW (structural), then connect them with explicit dependencies.
Follow the guidance in each <instruction> block, and study the <example> blocks to see good vs. bad patterns.

When using this template to create a PRD (not parse it), use code-context-aware AI assistants for best results:
Why? The AI needs to understand your existing codebase to make good architectural decisions about modules, dependencies, and integration points.
Recommended tools:
Note: Once your PRD is created, task-master parse-prd works with any configured AI model - it just needs to read the PRD text itself, not your codebase.
</rpg-method>
Keep this section focused - don't jump into implementation details yet. </instruction>
[Describe the core problem. Be concrete about user pain points.]
[Define personas, their workflows, and what they're trying to achieve.]
[Quantifiable outcomes. Examples: "80% task completion via autopilot", "< 5% manual intervention rate"]
</overview>

Step 1: Identify high-level capability domains
Step 2: For each capability, enumerate specific features
Step 3: For each feature, define:
<example type="good">
Feature: Business rule validation
- Description: Apply domain-specific validation rules
- Inputs: Validated data object, rule set
- Outputs: Boolean + list of violated rules
- Behavior: Execute rules sequentially, short-circuit on failure
</example>
<example type="bad">
Capability: validation.js
(Problem: This is a FILE, not a CAPABILITY. Mixing structure into functional thinking.)

Capability: Validation
Feature: Make sure data is good
(Problem: Too vague. No inputs/outputs. Not actionable.)
</example> </instruction>
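The "Business rule validation" feature described in the good example can be sketched in a few lines of JavaScript. This is a hypothetical illustration (the rule shape and function name are not part of the template): each rule is a `{ name, check(data) }` object, execution runs sequentially, and the first failure short-circuits the run.

```javascript
// Hypothetical sketch of a business-rule validator: returns a boolean
// plus the list of violated rules, short-circuiting on first failure.
function validateRules(data, rules) {
  const violations = [];
  for (const rule of rules) {
    if (!rule.check(data)) {
      violations.push(rule.name);
      break; // short-circuit on failure
    }
  }
  return { valid: violations.length === 0, violations };
}

const result = validateRules({ amount: -5 }, [
  { name: 'amount-positive', check: (d) => d.amount > 0 },
  { name: 'amount-below-limit', check: (d) => d.amount < 1000 },
]);
console.log(result); // { valid: false, violations: ['amount-positive'] }
```

Note how the feature's contract (inputs, outputs, behavior) translates directly into a function signature — that is the level of precision a good Feature entry should enable.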
[Brief description of what this capability domain covers]
...
</functional-decomposition>

Rules:
The goal: Create a clear mapping between "what it does" (functional) and "where it lives" (structural).
<example type="good">
Capability: Data Validation → Maps to: src/validation/
├── schema-validator.js   (Schema validation feature)
├── rule-validator.js     (Business rule validation feature)
└── index.js              (Public exports)

Exports:
</example>

<example type="bad">
Capability: Data Validation → Maps to: src/validation/everything.js
(Problem: One giant file. Features should map to separate files for maintainability.)
</example> </instruction>
project-root/
├── src/
│ ├── [module-name]/ # Maps to: [Capability Name]
│ │ ├── [file].js # Maps to: [Feature Name]
│ │ └── index.js # Public exports
│ └── [module-name]/
├── tests/
└── docs/
module-name/
├── feature1.js
├── feature2.js
└── index.js
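The one-feature-per-file layout above pairs each module with an index.js that exposes only its public surface. A minimal sketch, simulated in a single file for brevity (feature and function names are hypothetical; in real code the index would use `export { ... } from './feature1.js'` re-exports):

```javascript
// feature1.js — schema validation feature (illustrative)
function validateSchema(data) {
  return typeof data === 'object' && data !== null;
}

// feature2.js — business rule feature (illustrative)
function validateRules(data) {
  return Boolean(data && data.name);
}

// index.js — the module's public surface; internals stay private
const validation = { validateSchema, validateRules };

console.log(validation.validateSchema({ name: 'a' })); // true
```

Consumers import only from the module root, so individual feature files can be refactored without breaking callers.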
functionName() - [what it does]
ClassName - [what it does]

Define explicit dependencies between modules. This creates the topological order for task execution.
Rules:
Data Layer:
Core Layer:
No dependencies - these are built first.
[Continue building up layers...]
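The layered build order above is simply a topological sort of the module dependency graph. A depth-first sketch, with illustrative module names (not prescribed by the template):

```javascript
// Hypothetical sketch: derive a build order from a dependency map
// using depth-first topological sort. Throws on circular dependencies.
function topoSort(deps) {
  const order = [];
  const visited = new Set();
  function visit(mod, stack = new Set()) {
    if (visited.has(mod)) return;
    if (stack.has(mod)) throw new Error(`cycle at ${mod}`);
    stack.add(mod);
    for (const d of deps[mod] || []) visit(d, stack);
    stack.delete(mod);
    visited.add(mod);
    order.push(mod); // pushed only after all dependencies
  }
  Object.keys(deps).forEach((m) => visit(m));
  return order; // dependencies always precede dependents
}

const order = topoSort({
  foundation: [],
  validation: ['foundation'],
  ingestion: ['foundation', 'validation'],
});
console.log(order); // ['foundation', 'validation', 'ingestion']
```

This is exactly the ordering the implementation roadmap's phases should follow: no task appears before the tasks it depends on.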
</dependency-graph>

Each phase should:
Phase ordering follows topological sort of dependency graph.
<example type="good">
Phase 0: Foundation
Entry: Clean repository
Tasks:
- Implement error handling utilities
- Create base type definitions
- Setup configuration system
Exit: Other modules can import foundation without errors

Phase 1: Data Layer
Entry: Phase 0 complete
Tasks:
- Implement schema validator (uses: base types, error handling)
- Build data ingestion pipeline (uses: validator, config)
Exit: End-to-end data flow from input to validated output
</example>
<example type="bad">
Phase 1: Build Everything
Tasks:
- API
- Database
- UI
- Tests
(Problem: No clear focus. Too broad. Dependencies not considered.)
</example> </instruction>

Goal: [What foundational capability this establishes]
Entry Criteria: [What must be true before starting]
Tasks:
[Task name] (depends on: [none or list])
[Task name] (depends on: [none or list])
Exit Criteria: [Observable outcome that proves phase complete]
Delivers: [What can users/developers do after this phase?]
Goal:
Entry Criteria: Phase 0 complete
Tasks:
Exit Criteria:
Delivers:
[Continue with more phases...]
</implementation-roadmap>

Specify:
This section guides the AI when generating tests during the RED phase of TDD.
<example type="good">
Critical Test Scenarios for Data Validation module:
- Happy path: Valid data passes all checks
- Edge cases: Empty strings, null values, boundary numbers
- Error cases: Invalid types, missing required fields
- Integration: Validator works with ingestion pipeline
</example> </instruction>

          /\
         /E2E\          ← [X]% (End-to-end, slow, comprehensive)
        /------\
       /Integration\    ← [Y]% (Module interactions)
      /------------\
     / Unit Tests   \   ← [Z]% (Fast, isolated, deterministic)
    /----------------\
Happy path:
Edge cases:
Error cases:
Integration points:
[Specific instructions for Surgical Test Generator about what to focus on, what patterns to follow, project-specific test conventions]
</test-strategy>

Keep this section AFTER functional/structural decomposition - implementation details come after understanding structure. </instruction>
[Major architectural pieces and their responsibilities]
[Core data structures, schemas, database design]
[Languages, frameworks, key libraries]
Decision: [Technology/Pattern]
Categories:
Risk: [Description]
[External dependencies, blocking issues]
[Scope creep, underestimation, unclear requirements]
</risks>

[Domain-specific terms]
[Things to resolve during development] </appendix>
When you run task-master parse-prd <file>.txt, the parser:
1. Extracts capabilities → Main tasks
   - `### Capability:` becomes a top-level task
2. Extracts features → Subtasks
   - `#### Feature:` becomes a subtask under its capability
3. Parses dependencies → Task dependencies
   - `Depends on: [X, Y]` sets task.dependencies = ["X", "Y"]
4. Orders by phases → Task priorities
5. Uses test strategy → Test generation context
Result: A dependency-aware task graph that can be executed in topological order.
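The parsed result can be pictured as plain data. Field names here are illustrative, not the parser's actual output schema:

```javascript
// Illustrative shape of a dependency-aware task graph.
const tasks = [
  {
    id: 1,
    title: 'Data Validation',        // from a "### Capability:" heading
    dependencies: [],
    subtasks: [
      { id: '1.1', title: 'Schema validation' },          // from "#### Feature:"
      { id: '1.2', title: 'Business rule validation' },
    ],
  },
  { id: 2, title: 'Data Ingestion', dependencies: [1], subtasks: [] },
];

// Tasks whose dependencies are all satisfied are eligible to start first.
const ready = tasks
  .filter((t) => t.dependencies.length === 0)
  .map((t) => t.title);
console.log(ready); // [ 'Data Validation' ]
```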
Traditional flat PRDs lead to:
RPG-structured PRDs provide:
task-master expand to break down complex tasks
task-master parse-prd --research leverages AI for better task generation