get-shit-done/workflows/new-project.md
<required_reading> Read all files referenced by the invoking prompt's execution_context before starting. </required_reading>
<available_agent_types> Valid GSD subagent types (use exact names — do not fall back to 'general-purpose'): gsd-project-researcher, gsd-research-synthesizer, gsd-roadmapper. </available_agent_types>
<auto_mode>
Check if --auto flag is present in $ARGUMENTS.
If auto mode:
Document requirement: Auto mode requires an idea document — either a file reference or pasted text. For example:
/gsd-new-project --auto @prd.md
If no document content is provided, error:
Error: --auto requires an idea document.
Usage:
/gsd-new-project --auto @your-idea.md
/gsd-new-project --auto [paste or write your idea here]
The document should describe what you want to build.
</auto_mode>
<process>MANDATORY FIRST STEP — Execute these checks before ANY user interaction:
INIT=$(gsd-sdk query init.new-project)
if [[ "$INIT" == @file:* ]]; then INIT=$(cat "${INIT#@file:}"); fi
AGENT_SKILLS_RESEARCHER=$(gsd-sdk query agent-skills gsd-project-researcher)
AGENT_SKILLS_SYNTHESIZER=$(gsd-sdk query agent-skills gsd-research-synthesizer)
AGENT_SKILLS_ROADMAPPER=$(gsd-sdk query agent-skills gsd-roadmapper)
Parse JSON for: researcher_model, synthesizer_model, roadmapper_model, commit_docs, project_exists, has_codebase_map, planning_exists, has_existing_code, has_package_file, is_brownfield, needs_codebase_map, has_git, project_path, agents_installed, missing_agents, agent_runtime, agents_dir, required_agents, required_agents_installed, missing_required_agents, agent_skill_payloads_available, agent_skill_payload_agents.
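Pulling individual fields out of that JSON can be sketched in plain shell. This is illustrative only — a real implementation should use a proper JSON parser such as jq, and the sample payload below is hypothetical:

```shell
# Illustrative only: extract one string field from the init JSON with sed.
# A real run should prefer jq; this sample payload is made up.
INIT='{"researcher_model":"sonnet","project_exists":false}'
researcher_model=$(printf '%s' "$INIT" | sed 's/.*"researcher_model":"\([^"]*\)".*/\1/')
echo "$researcher_model"
```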
If agents_installed is false: Display a warning before proceeding:
⚠ GSD agents not installed. The following agents are missing from your agents directory:
{missing_agents joined with newline}
Runtime checked: {agent_runtime}
Agents directory checked: {agents_dir}
Required new-project agents missing:
{missing_required_agents joined with newline, or "none"}
Agent skill payloads available: {agent_skill_payloads_available}
Agent skill payload agents:
{agent_skill_payload_agents joined with newline, or "none"}
Skill payloads only provide prompt context. Named subagent spawns still require agent
definitions to be installed for this runtime.
Subagent spawns (gsd-project-researcher, gsd-research-synthesizer, gsd-roadmapper) will fail
with "agent type not found" if `required_agents_installed` is false. Run the installer with --global to make agents available:
npx get-shit-done-cc@latest --global
Proceeding without research subagents — roadmap will be generated inline.
Skip Steps 6–7 (parallel research and synthesis) and proceed directly to roadmap creation in Step 8.
Detect runtime and set instruction file name:
Derive RUNTIME from the invoking prompt's execution_context path:
- /.codex/ → RUNTIME=codex
- /.gemini/ → RUNTIME=gemini
- /.config/opencode/ or /.opencode/ → RUNTIME=opencode
- Otherwise → RUNTIME=claude
If execution_context path is not available, fall back to env vars:
if [ -n "$CODEX_HOME" ]; then RUNTIME="codex"
elif [ -n "$GEMINI_CONFIG_DIR" ]; then RUNTIME="gemini"
elif [ -n "$OPENCODE_CONFIG_DIR" ] || [ -n "$OPENCODE_CONFIG" ]; then RUNTIME="opencode"
else RUNTIME="claude"; fi
Set the instruction file variable:
if [ "$RUNTIME" = "codex" ]; then INSTRUCTION_FILE="AGENTS.md"; else INSTRUCTION_FILE="CLAUDE.md"; fi
All subsequent references to the project instruction file use $INSTRUCTION_FILE.
If project_exists is true: Error — project already initialized. Use /gsd-progress.
If has_git is false: Initialize git:
git init
If auto mode: Skip to Step 4 (assume greenfield, synthesize PROJECT.md from provided document).
If needs_codebase_map is true (from init — existing code detected but no codebase map):
Text mode (workflow.text_mode: true in config or --text flag): Set TEXT_MODE=true if --text is present in $ARGUMENTS OR text_mode from init JSON is true. When TEXT_MODE is active, replace every AskUserQuestion call with a plain-text numbered list and ask the user to type their choice number. This is required for non-Claude runtimes (OpenAI Codex, Gemini CLI, etc.) where AskUserQuestion is not available.
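The `--text` flag check can be sketched in POSIX shell; the `$ARGUMENTS` value below is a hypothetical sample, and the init-config `text_mode` value would OR into the result:

```shell
# Sketch: set TEXT_MODE when --text appears among the arguments.
# $ARGUMENTS here is a hypothetical sample invocation.
ARGUMENTS="--text --auto @idea.md"
TEXT_MODE=false
case " $ARGUMENTS " in
  *" --text "*) TEXT_MODE=true ;;
esac
echo "TEXT_MODE=$TEXT_MODE"
```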
Use AskUserQuestion:
If "Map codebase first":
Run `/gsd-map-codebase` first, then return to `/gsd-new-project`
Exit command.
If "Skip mapping" OR needs_codebase_map is false: Continue to Step 3.
If auto mode: Collect config settings upfront before processing the idea document.
YOLO mode is implicit (auto = YOLO). Ask remaining config questions:
Round 1 — Core settings (3 questions, no Mode question):
AskUserQuestion([
{
header: "Granularity",
question: "How finely should scope be sliced into phases?",
multiSelect: false,
options: [
{ label: "Coarse (Recommended)", description: "Fewer, broader phases (3-5 phases, 1-3 plans each)" },
{ label: "Standard", description: "Balanced phase size (5-8 phases, 3-5 plans each)" },
{ label: "Fine", description: "Many focused phases (8-12 phases, 5-10 plans each)" }
]
},
{
header: "Execution",
question: "Run plans in parallel?",
multiSelect: false,
options: [
{ label: "Parallel (Recommended)", description: "Independent plans run simultaneously" },
{ label: "Sequential", description: "One plan at a time" }
]
},
{
header: "Git Tracking",
question: "Commit planning docs to git?",
multiSelect: false,
options: [
{ label: "Yes (Recommended)", description: "Planning docs tracked in version control" },
{ label: "No", description: "Keep .planning/ local-only (add to .gitignore)" }
]
}
])
Round 2 — Workflow agents (same as Step 5):
AskUserQuestion([
{
header: "Research",
question: "Research before planning each phase? (adds tokens/time)",
multiSelect: false,
options: [
{ label: "Yes (Recommended)", description: "Investigate domain, find patterns, surface gotchas" },
{ label: "No", description: "Plan directly from requirements" }
]
},
{
header: "Plan Check",
question: "Verify plans will achieve their goals? (adds tokens/time)",
multiSelect: false,
options: [
{ label: "Yes (Recommended)", description: "Catch gaps before execution starts" },
{ label: "No", description: "Execute plans without verification" }
]
},
{
header: "Verifier",
question: "Verify work satisfies requirements after each phase? (adds tokens/time)",
multiSelect: false,
options: [
{ label: "Yes (Recommended)", description: "Confirm deliverables match phase goals" },
{ label: "No", description: "Trust execution, skip verification" }
]
},
{
header: "AI Models",
question: "Which AI models for planning agents?",
multiSelect: false,
options: [
{ label: "Balanced (Recommended)", description: "Sonnet for most agents — good quality/cost ratio" },
{ label: "Quality", description: "Opus for research/roadmap — higher cost, deeper analysis" },
{ label: "Budget", description: "Haiku where possible — fastest, lowest cost" },
{ label: "Inherit", description: "Use the current session model for all agents (OpenCode /model)" }
]
}
])
Round 3 — PR body onboarding:
Ask which optional PRD-style sections /gsd-ship should append to generated PR bodies. These map to ship.pr_body_sections; selected sections are written with "enabled": true, unselected seeded sections are written with "enabled": false so the project can enable them later without editing ship.md.
Prefer lean/agile PRD sections that make the delivered increment clear: user stories, acceptance criteria, Definition of Done or release criteria, risks, dependencies, and stakeholder review.
AskUserQuestion([
{
header: "PR Body",
question: "Which optional PRD-style sections should /gsd-ship include in PR bodies?",
multiSelect: true,
options: [
{ label: "User Stories & Acceptance Criteria", description: "Append user-facing stories and acceptance checks from REQUIREMENTS.md" },
{ label: "Risks & Dependencies", description: "Append rollout risks, dependencies, and rollback notes from PLAN.md" },
{ label: "Success Metrics & Release Criteria", description: "Append measurable Definition of Done and release checks for stakeholder review" },
{ label: "Stakeholder Review & Approval", description: "Append approval checklist for projects that need sign-off traceability" }
]
}
])
Build ship.pr_body_sections from those choices. For selected options, set enabled: true; for seeded but unselected options, set enabled: false. If the user selects none, use "ship":{"pr_body_sections":[]}.
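The selection-to-flag mapping can be sketched as follows; `SELECTED` is a hypothetical comma-joined answer, not a real captured AskUserQuestion result:

```shell
# Sketch: mark each seeded PR-body section enabled/disabled from the user's picks.
# SELECTED is a hypothetical comma-joined selection.
SELECTED="User Stories & Acceptance Criteria,Risks & Dependencies"
result=""
for section in "User Stories & Acceptance Criteria" "Risks & Dependencies" \
               "Success Metrics & Release Criteria" "Stakeholder Review & Approval"; do
  case ",$SELECTED," in
    *",$section,"*) result="$result$section=true;" ;;
    *)              result="$result$section=false;" ;;
  esac
done
echo "$result"
```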
Create .planning/config.json with all settings (CLI fills in remaining defaults automatically):
mkdir -p .planning
gsd-sdk query config-new-project '{"mode":"yolo","granularity":"[selected]","parallelization":true|false,"commit_docs":true|false,"model_profile":"quality|balanced|budget|inherit","workflow":{"research":true|false,"plan_check":true|false,"verifier":true|false,"nyquist_validation":true|false,"auto_advance":true},"ship":{"pr_body_sections":[{"heading":"User Stories & Acceptance Criteria","enabled":true|false,"source":"REQUIREMENTS.md ## User Stories || REQUIREMENTS.md ## Acceptance Criteria","fallback":"- Acceptance criteria are covered by the linked requirements and verification evidence."},{"heading":"Risks & Dependencies","enabled":true|false,"source":"PLAN.md ## Risks || PLAN.md ## Dependencies","fallback":"- No known high-risk rollout dependencies."},{"heading":"Success Metrics & Release Criteria","enabled":true|false,"source":"REQUIREMENTS.md ## Definition of Done || VERIFICATION.md ## Release Criteria","fallback":"- Release when automated verification and required manual checks pass."},{"heading":"Stakeholder Review & Approval","enabled":true|false,"template":"- Product owner approval pending for {phase_name}."}]}}'
If commit_docs = No: Add .planning/ to .gitignore.
Commit config.json:
mkdir -p .planning
gsd-sdk query commit "chore: add project config" --files .planning/config.json
Persist auto-advance chain flag to config (survives context compaction):
gsd-sdk query config-set workflow._auto_chain_active true
Proceed to Step 4 (skip Steps 3 and 5).
Check for existing spike and sketch work that should inform project setup:
# Check for spike findings skill (project-local)
SPIKE_SKILL=$(ls ./.claude/skills/spike-findings-*/SKILL.md 2>/dev/null | head -1 || true)
# Check for sketch findings skill (project-local)
SKETCH_SKILL=$(ls ./.claude/skills/sketch-findings-*/SKILL.md 2>/dev/null | head -1 || true)
# Check for raw spikes/sketches in .planning/
HAS_SPIKES=$(ls .planning/spikes/MANIFEST.md 2>/dev/null)
HAS_SKETCHES=$(ls .planning/sketches/MANIFEST.md 2>/dev/null)
If any of these exist, surface them before questioning:
⚡ Prior exploration detected:
{if SPIKE_SKILL} ✓ Spike findings skill: {path} — validated patterns from experiments
{if SKETCH_SKILL} ✓ Sketch findings skill: {path} — validated design decisions
{if HAS_SPIKES && !SPIKE_SKILL} ◆ Raw spikes in .planning/spikes/ — consider `/gsd-spike --wrap-up` to package findings
{if HAS_SKETCHES && !SKETCH_SKILL} ◆ Raw sketches in .planning/sketches/ — consider `/gsd-sketch --wrap-up` to package findings
These findings will be incorporated into project context and available to planning agents.
If spike/sketch findings skills exist, read their SKILL.md files to inform the questioning phase — they contain validated patterns, constraints, and design decisions that should shape the project definition.
If auto mode: Skip (already handled in Step 2a). Extract project context from provided document instead and proceed to Step 4.
Display stage banner:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► QUESTIONING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Open the conversation:
Ask inline (freeform, NOT AskUserQuestion):
"What do you want to build?"
Wait for their response. This gives you the context needed to ask intelligent follow-up questions.
Research-before-questions mode: Check if workflow.research_before_questions is enabled in .planning/config.json (or the config from init context). When enabled, before asking follow-up questions about a topic area:
When disabled (default), ask questions directly as before.
Follow the thread:
Based on what they said, ask follow-up questions that dig into their response. Use AskUserQuestion with options that probe what they mentioned — interpretations, clarifications, concrete examples.
Keep following threads. Each answer opens new threads to explore. Ask about:
Consult questioning.md for techniques:
Check context (background, not out loud):
As you go, mentally check the context checklist from questioning.md. If gaps remain, weave questions naturally. Don't suddenly switch to checklist mode.
Decision gate:
When you could write a clear PROJECT.md, use AskUserQuestion:
If "Keep exploring" — ask what they want to add, or identify gaps and probe naturally.
Loop until "Create PROJECT.md" selected.
If auto mode: Synthesize from provided document. No "Ready?" gate was shown — proceed directly to commit.
Synthesize all context into .planning/PROJECT.md using the template from templates/project.md.
For greenfield projects:
Initialize requirements as hypotheses:
## Requirements
### Validated
(None yet — ship to validate)
### Active
- [ ] [Requirement 1]
- [ ] [Requirement 2]
- [ ] [Requirement 3]
### Out of Scope
- [Exclusion 1] — [why]
- [Exclusion 2] — [why]
All Active requirements are hypotheses until shipped and validated.
For brownfield projects (codebase map exists):
Infer Validated requirements from existing code:
Read .planning/codebase/ARCHITECTURE.md and STACK.md.
## Requirements
### Validated
- ✓ [Existing capability 1] — existing
- ✓ [Existing capability 2] — existing
- ✓ [Existing capability 3] — existing
### Active
- [ ] [New requirement 1]
- [ ] [New requirement 2]
### Out of Scope
- [Exclusion 1] — [why]
Key Decisions:
Initialize with any decisions made during questioning:
## Key Decisions
| Decision | Rationale | Outcome |
|----------|-----------|---------|
| [Choice from questioning] | [Why] | — Pending |
Last updated footer:
---
*Last updated: [date] after initialization*
Evolution section (include at the end of PROJECT.md, before the footer):
## Evolution
This document evolves at phase transitions and milestone boundaries.
**After each phase transition** (via `/gsd-transition`):
1. Requirements invalidated? → Move to Out of Scope with reason
2. Requirements validated? → Move to Validated with phase reference
3. New requirements emerged? → Add to Active
4. Decisions to log? → Add to Key Decisions
5. "What This Is" still accurate? → Update if drifted
**After each milestone** (via `/gsd-complete-milestone`):
1. Full review of all sections
2. Core Value check — still the right priority?
3. Audit Out of Scope — reasons still valid?
4. Update Context with current state
Do not compress. Capture everything gathered.
Commit PROJECT.md:
mkdir -p .planning
gsd-sdk query commit "docs: initialize project" --files .planning/PROJECT.md
If auto mode: Skip — config was collected in Step 2a. Proceed to Step 5.5.
Check for global defaults at ~/.gsd/defaults.json. If the file exists, read and display its contents before asking:
DEFAULTS_RAW=$(cat ~/.gsd/defaults.json 2>/dev/null)
Format the JSON into human-readable bullets using these label mappings:
- mode → "Mode"
- granularity → "Granularity"
- parallelization → "Execution" (true → "Parallel", false → "Sequential")
- commit_docs → "Git Tracking" (true → "Yes", false → "No")
- model_profile → "AI Models"
- workflow.research → "Research" (true → "Yes", false → "No")
- workflow.plan_check → "Plan Check" (true → "Yes", false → "No")
- workflow.verifier → "Verifier" (true → "Yes", false → "No")
Display above the prompt:
Your saved defaults (~/.gsd/defaults.json):
• Mode: [value]
• Granularity: [value]
• Execution: [Parallel|Sequential]
• Git Tracking: [Yes|No]
• AI Models: [value]
• Research: [Yes|No]
• Plan Check: [Yes|No]
• Verifier: [Yes|No]
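The true/false → Yes/No rendering used in the bullets above can be sketched as a small helper (the function name is illustrative):

```shell
# Sketch: render a boolean config value for the defaults display.
render_bool() {
  if [ "$1" = "true" ]; then echo "Yes"; else echo "No"; fi
}
git_tracking=$(render_bool true)
verifier=$(render_bool false)
echo "Git Tracking: $git_tracking / Verifier: $verifier"
```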
Then ask:
AskUserQuestion([
{
question: "Use these saved defaults?",
header: "Defaults",
multiSelect: false,
options: [
{ label: "Use as-is (Recommended)", description: "Proceed with the defaults shown above" },
{ label: "Modify some settings", description: "Keep defaults, change a few" },
{ label: "Configure fresh", description: "Walk through all questions from scratch" }
]
}
])
If "Use as-is": use the saved default values for config.json and skip directly to Commit config.json below.
If "Modify some settings": present a selection of every setting with its current saved value.
If TEXT_MODE is active (non-Claude runtimes): display a numbered list and ask the user to type the numbers of settings they want to change (comma-separated). Parse the response and proceed.
Which settings do you want to change? (enter numbers, comma-separated)
1. Mode — Currently: [value]
2. Granularity — Currently: [value]
3. Execution — Currently: [Parallel|Sequential]
4. Git Tracking — Currently: [Yes|No]
5. AI Models — Currently: [value]
6. Research — Currently: [Yes|No]
7. Plan Check — Currently: [Yes|No]
8. Verifier — Currently: [Yes|No]
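Parsing the typed comma-separated reply can be sketched as follows (the reply value is a hypothetical example):

```shell
# Sketch: split a typed reply like "2,5,8" into individual setting numbers.
reply="2,5,8"
selected=""
oldIFS=$IFS
IFS=','
for n in $reply; do
  selected="$selected[$n]"
done
IFS=$oldIFS
echo "$selected"
```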
Otherwise (Claude runtime with AskUserQuestion): use multiSelect:
AskUserQuestion([
{
question: "Which settings do you want to change?",
header: "Change Settings",
multiSelect: true,
options: [
{ label: "Mode", description: "Currently: [value]" },
{ label: "Granularity", description: "Currently: [value]" },
{ label: "Execution", description: "Currently: [Parallel|Sequential]" },
{ label: "Git Tracking", description: "Currently: [Yes|No]" },
{ label: "AI Models", description: "Currently: [value]" },
{ label: "Research", description: "Currently: [Yes|No]" },
{ label: "Plan Check", description: "Currently: [Yes|No]" },
{ label: "Verifier", description: "Currently: [Yes|No]" }
]
}
])
For each selected setting, ask only that question using the option set from Round 1 / Round 2 below. Merge user answers over the saved defaults — unchanged settings retain their saved values. Then skip to Commit config.json.
If "Configure fresh" or ~/.gsd/defaults.json doesn't exist: proceed with the questions below.
Round 1 — Core workflow settings (4 questions):
questions: [
{
header: "Mode",
question: "How do you want to work?",
multiSelect: false,
options: [
{ label: "YOLO (Recommended)", description: "Auto-approve, just execute" },
{ label: "Interactive", description: "Confirm at each step" }
]
},
{
header: "Granularity",
question: "How finely should scope be sliced into phases?",
multiSelect: false,
options: [
{ label: "Coarse", description: "Fewer, broader phases (3-5 phases, 1-3 plans each)" },
{ label: "Standard", description: "Balanced phase size (5-8 phases, 3-5 plans each)" },
{ label: "Fine", description: "Many focused phases (8-12 phases, 5-10 plans each)" }
]
},
{
header: "Execution",
question: "Run plans in parallel?",
multiSelect: false,
options: [
{ label: "Parallel (Recommended)", description: "Independent plans run simultaneously" },
{ label: "Sequential", description: "One plan at a time" }
]
},
{
header: "Git Tracking",
question: "Commit planning docs to git?",
multiSelect: false,
options: [
{ label: "Yes (Recommended)", description: "Planning docs tracked in version control" },
{ label: "No", description: "Keep .planning/ local-only (add to .gitignore)" }
]
}
]
Round 2 — Workflow agents:
These spawn additional agents during planning/execution. They add tokens and time but improve quality.
| Agent | When it runs | What it does |
|---|---|---|
| Researcher | Before planning each phase | Investigates domain, finds patterns, surfaces gotchas |
| Plan Checker | After plan is created | Verifies plan actually achieves the phase goal |
| Verifier | After phase execution | Confirms must-haves were delivered |
All recommended for important projects. Skip for quick experiments.
questions: [
{
header: "Research",
question: "Research before planning each phase? (adds tokens/time)",
multiSelect: false,
options: [
{ label: "Yes (Recommended)", description: "Investigate domain, find patterns, surface gotchas" },
{ label: "No", description: "Plan directly from requirements" }
]
},
{
header: "Plan Check",
question: "Verify plans will achieve their goals? (adds tokens/time)",
multiSelect: false,
options: [
{ label: "Yes (Recommended)", description: "Catch gaps before execution starts" },
{ label: "No", description: "Execute plans without verification" }
]
},
{
header: "Verifier",
question: "Verify work satisfies requirements after each phase? (adds tokens/time)",
multiSelect: false,
options: [
{ label: "Yes (Recommended)", description: "Confirm deliverables match phase goals" },
{ label: "No", description: "Trust execution, skip verification" }
]
},
{
header: "AI Models",
question: "Which AI models for planning agents?",
multiSelect: false,
options: [
{ label: "Balanced (Recommended)", description: "Sonnet for most agents — good quality/cost ratio" },
{ label: "Quality", description: "Opus for research/roadmap — higher cost, deeper analysis" },
{ label: "Budget", description: "Haiku where possible — fastest, lowest cost" },
{ label: "Inherit", description: "Use the current session model for all agents (OpenCode /model)" }
]
}
]
PR body onboarding: Ask which optional PRD-style sections /gsd-ship should append to generated PR bodies. Use the same ship.pr_body_sections mapping as Step 2a: selected sections get enabled: true, seeded-but-unselected sections get enabled: false, and selecting none writes an empty list. Prefer lean/agile PRD sections that make user value, acceptance criteria, Definition of Done, and stakeholder traceability explicit.
Recommended options:
- User Stories & Acceptance Criteria
- Risks & Dependencies
- Success Metrics & Release Criteria
- Stakeholder Review & Approval
Create .planning/config.json with all settings (CLI fills in remaining defaults automatically):
mkdir -p .planning
gsd-sdk query config-new-project '{"mode":"[yolo|interactive]","granularity":"[selected]","parallelization":true|false,"commit_docs":true|false,"model_profile":"quality|balanced|budget|inherit","workflow":{"research":true|false,"plan_check":true|false,"verifier":true|false,"nyquist_validation":[false if granularity=coarse, true otherwise]},"ship":{"pr_body_sections":[{"heading":"User Stories & Acceptance Criteria","enabled":true|false,"source":"REQUIREMENTS.md ## User Stories || REQUIREMENTS.md ## Acceptance Criteria","fallback":"- Acceptance criteria are covered by the linked requirements and verification evidence."},{"heading":"Risks & Dependencies","enabled":true|false,"source":"PLAN.md ## Risks || PLAN.md ## Dependencies","fallback":"- No known high-risk rollout dependencies."},{"heading":"Success Metrics & Release Criteria","enabled":true|false,"source":"REQUIREMENTS.md ## Definition of Done || VERIFICATION.md ## Release Criteria","fallback":"- Release when automated verification and required manual checks pass."},{"heading":"Stakeholder Review & Approval","enabled":true|false,"template":"- Product owner approval pending for {phase_name}."}]}}'
Note: Run /gsd-settings anytime to update model profile, workflow agents, branching strategy, and other preferences.
If commit_docs = No:
- Set commit_docs: false in config.json
- Add .planning/ to .gitignore (create if needed)
If commit_docs = Yes:
Commit config.json:
gsd-sdk query commit "chore: add project config" --files .planning/config.json
Detect multi-repo workspace:
Check for directories with their own .git folders (separate repos within the workspace):
find . -maxdepth 1 -type d -not -name ".*" -not -name "node_modules" -exec test -d "{}/.git" \; -print
If sub-repos found:
Strip the ./ prefix to get directory names (e.g., ./backend → backend).
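The prefix strip is a plain parameter expansion:

```shell
# Sketch: strip the leading ./ that find prints before each directory name.
d="./backend"
name="${d#./}"
echo "$name"
```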
Use AskUserQuestion:
If user selects one or more directories:
- Set planning.sub_repos in config.json to the selected directory names array (e.g., ["backend", "frontend"])
- Set planning.commit_docs to false (planning docs stay local in multi-repo workspaces)
- Add .planning/ to .gitignore if not already present
Config changes are saved locally — no commit needed since commit_docs is false in multi-repo mode.
If no sub-repos found or user selects none: Continue with no changes to config.
Use models from init: researcher_model, synthesizer_model, roadmapper_model.
If auto mode: Default to "Research first" without asking.
Use AskUserQuestion:
If "Research first":
Display stage banner:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► RESEARCHING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Researching [domain] ecosystem...
Create research directory:
mkdir -p .planning/research
Determine milestone context:
Check if this is greenfield or subsequent milestone:
Display spawning indicator:
◆ Spawning 4 researchers in parallel...
→ Stack research
→ Features research
→ Architecture research
→ Pitfalls research
Spawn 4 parallel gsd-project-researcher agents with path references:
Agent(prompt="<research_type>
Project Research — Stack dimension for [domain].
</research_type>
<milestone_context>
[greenfield OR subsequent]
Greenfield: Research the standard stack for building [domain] from scratch.
Subsequent: Research what's needed to add [target features] to an existing [domain] app. Don't re-research the existing system.
</milestone_context>
<question>
What's the standard 2025 stack for [domain]?
</question>
<files_to_read>
- {project_path} (Project context and goals)
</files_to_read>
${AGENT_SKILLS_RESEARCHER}
<downstream_consumer>
Your STACK.md feeds into roadmap creation. Be prescriptive:
- Specific libraries with versions
- Clear rationale for each choice
- What NOT to use and why
</downstream_consumer>
<quality_gate>
- [ ] Versions are current (verify with Context7/official docs, not training data)
- [ ] Rationale explains WHY, not just WHAT
- [ ] Confidence levels assigned to each recommendation
</quality_gate>
<output>
Write to: .planning/research/STACK.md
Use template: ~/.claude/get-shit-done/templates/research-project/STACK.md
</output>
", subagent_type="gsd-project-researcher", model="{researcher_model}", description="Stack research")
Agent(prompt="<research_type>
Project Research — Features dimension for [domain].
</research_type>
<milestone_context>
[greenfield OR subsequent]
Greenfield: What features do [domain] products have? What's table stakes vs differentiating?
Subsequent: How do [target features] typically work? What's expected behavior?
</milestone_context>
<question>
What features do [domain] products have? What's table stakes vs differentiating?
</question>
<files_to_read>
- {project_path} (Project context)
</files_to_read>
${AGENT_SKILLS_RESEARCHER}
<downstream_consumer>
Your FEATURES.md feeds into requirements definition. Categorize clearly:
- Table stakes (must have or users leave)
- Differentiators (competitive advantage)
- Anti-features (things to deliberately NOT build)
</downstream_consumer>
<quality_gate>
- [ ] Categories are clear (table stakes vs differentiators vs anti-features)
- [ ] Complexity noted for each feature
- [ ] Dependencies between features identified
</quality_gate>
<output>
Write to: .planning/research/FEATURES.md
Use template: ~/.claude/get-shit-done/templates/research-project/FEATURES.md
</output>
", subagent_type="gsd-project-researcher", model="{researcher_model}", description="Features research")
Agent(prompt="<research_type>
Project Research — Architecture dimension for [domain].
</research_type>
<milestone_context>
[greenfield OR subsequent]
Greenfield: How are [domain] systems typically structured? What are major components?
Subsequent: How do [target features] integrate with existing [domain] architecture?
</milestone_context>
<question>
How are [domain] systems typically structured? What are major components?
</question>
<files_to_read>
- {project_path} (Project context)
</files_to_read>
${AGENT_SKILLS_RESEARCHER}
<downstream_consumer>
Your ARCHITECTURE.md informs phase structure in roadmap. Include:
- Component boundaries (what talks to what)
- Data flow (how information moves)
- Suggested build order (dependencies between components)
</downstream_consumer>
<quality_gate>
- [ ] Components clearly defined with boundaries
- [ ] Data flow direction explicit
- [ ] Build order implications noted
</quality_gate>
<output>
Write to: .planning/research/ARCHITECTURE.md
Use template: ~/.claude/get-shit-done/templates/research-project/ARCHITECTURE.md
</output>
", subagent_type="gsd-project-researcher", model="{researcher_model}", description="Architecture research")
Agent(prompt="<research_type>
Project Research — Pitfalls dimension for [domain].
</research_type>
<milestone_context>
[greenfield OR subsequent]
Greenfield: What do [domain] projects commonly get wrong? Critical mistakes?
Subsequent: What are common mistakes when adding [target features] to [domain]?
</milestone_context>
<question>
What do [domain] projects commonly get wrong? Critical mistakes?
</question>
<files_to_read>
- {project_path} (Project context)
</files_to_read>
${AGENT_SKILLS_RESEARCHER}
<downstream_consumer>
Your PITFALLS.md prevents mistakes in roadmap/planning. For each pitfall:
- Warning signs (how to detect early)
- Prevention strategy (how to avoid)
- Which phase should address it
</downstream_consumer>
<quality_gate>
- [ ] Pitfalls are specific to this domain (not generic advice)
- [ ] Prevention strategies are actionable
- [ ] Phase mapping included where relevant
</quality_gate>
<output>
Write to: .planning/research/PITFALLS.md
Use template: ~/.claude/get-shit-done/templates/research-project/PITFALLS.md
</output>
", subagent_type="gsd-project-researcher", model="{researcher_model}", description="Pitfalls research")
ORCHESTRATOR RULE — CODEX RUNTIME: After calling all 4 researcher Agent() calls above, do NOT read research files or synthesize content independently while the subagents are active. Wait for all 4 researchers to complete before spawning the synthesizer. This prevents duplicate work and wasted context.
After all 4 agents complete, spawn synthesizer to create SUMMARY.md:
Agent(prompt="
<task>
Synthesize research outputs into SUMMARY.md.
</task>
<files_to_read>
- .planning/research/STACK.md
- .planning/research/FEATURES.md
- .planning/research/ARCHITECTURE.md
- .planning/research/PITFALLS.md
</files_to_read>
${AGENT_SKILLS_SYNTHESIZER}
<output>
Write to: .planning/research/SUMMARY.md
Use template: ~/.claude/get-shit-done/templates/research-project/SUMMARY.md
Commit after writing.
</output>
", subagent_type="gsd-research-synthesizer", model="{synthesizer_model}", description="Synthesize research")
ORCHESTRATOR RULE — CODEX RUNTIME: After calling Agent() above, stop working on this task immediately. Do not read more files, edit code, or run tests related to this task while the subagent is active. Wait for the subagent to return its result. This prevents duplicate work, conflicting edits, and wasted context. Only resume when the subagent result is available.
Display research complete banner and key findings:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► RESEARCH COMPLETE ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
## Key Findings
**Stack:** [from SUMMARY.md]
**Table Stakes:** [from SUMMARY.md]
**Watch Out For:** [from SUMMARY.md]
Files: `.planning/research/`
If "Skip research": Continue to Step 7.
Display stage banner:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► DEFINING REQUIREMENTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Load context:
Read PROJECT.md and extract:
If research exists: Read research/FEATURES.md and extract feature categories.
If auto mode:
Present features by category (interactive mode only):
Here are the features for [domain]:
## Authentication
**Table stakes:**
- Sign up with email/password
- Email verification
- Password reset
- Session management
**Differentiators:**
- Magic link login
- OAuth (Google, GitHub)
- 2FA
**Research notes:** [any relevant notes]
---
## [Next Category]
...
If no research: Gather requirements through conversation instead.
Ask: "What are the main things users need to be able to do?"
For each capability mentioned:
Scope each category:
For each category, use AskUserQuestion:
Track responses:
Identify gaps:
Use AskUserQuestion:
Validate core value:
Cross-check requirements against Core Value from PROJECT.md. If gaps detected, surface them.
Generate REQUIREMENTS.md:
Create .planning/REQUIREMENTS.md with:
REQ-ID format: [CATEGORY]-[NUMBER] (e.g., AUTH-01, CONT-02)
Requirement quality criteria:
Good requirements are:
Reject vague requirements. Push for specificity:
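A lightweight lint along these lines could catch malformed REQ-IDs before the list is presented (a hedged sketch, not part of the workflow: the sample lines stand in for .planning/REQUIREMENTS.md content, and the exact two-digit convention is an assumption):

```shell
# Illustrative REQ-ID lint: extract well-formed CATEGORY-NN ids from
# requirements text. The sample stands in for .planning/REQUIREMENTS.md.
reqs='- [ ] **AUTH-01**: User can create account with email/password
- [ ] **AUTH-2**: malformed id (single digit, silently dropped here)
- [ ] **CONT-02**: User can edit their own posts'

# Match one or more capitals, a dash, then exactly two digits.
valid_ids=$(echo "$reqs" | grep -oE '[A-Z]+-[0-9]{2}')
echo "$valid_ids"
```

Diffing the extracted ids against the checklist line count would surface entries whose ids do not conform.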
Present full requirements list (interactive mode only):
Show every requirement (not counts) for user confirmation:
## v1 Requirements
### Authentication
- [ ] **AUTH-01**: User can create account with email/password
- [ ] **AUTH-02**: User can log in and stay logged in across sessions
- [ ] **AUTH-03**: User can log out from any page
### Content
- [ ] **CONT-01**: User can create posts with text
- [ ] **CONT-02**: User can edit their own posts
[... full list ...]
---
Does this capture what you're building? (yes / adjust)
If "adjust": Return to scoping.
Commit requirements:
gsd-sdk query commit "docs: define v1 requirements" --files .planning/REQUIREMENTS.md
If auto mode: Set PROJECT_MODE=mvp and skip this prompt.
Mode prompt: Vertical MVP vs Horizontal Layers.
Ask the user how they want to structure the project. Use AskUserQuestion with two options:
Set PROJECT_MODE=mvp if the user picks Vertical MVP, otherwise PROJECT_MODE=standard.
When TEXT_MODE=true (per the workflow's existing TEXT_MODE handling for non-Claude runtimes), present the same two options as a plain-text numbered list and ask the user to type their choice number.
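In TEXT_MODE, the choice-to-mode mapping could be as simple as this sketch (the function name and the fallback-to-standard behavior are illustrative assumptions):

```shell
# Map the user's typed choice number to PROJECT_MODE:
# 1 = Vertical MVP; anything else = Horizontal Layers (standard).
pick_mode() {
  if [ "$1" = "1" ]; then
    echo "mvp"
  else
    echo "standard"
  fi
}

PROJECT_MODE=$(pick_mode 1)
echo "$PROJECT_MODE"
```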
Display stage banner:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► CREATING ROADMAP
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
◆ Spawning roadmapper...
ROADMAP.md template — mode-aware emit. When generating the initial ROADMAP.md:
- PROJECT_MODE=mvp: under each ### Phase N: header, emit **Mode:** mvp on the line immediately following **Goal:**. This sets every initial phase to MVP mode (per Phase-4-Persistence decision: per-phase mode, not project-wide config).
- PROJECT_MODE=standard: emit the standard ROADMAP.md template with no **Mode:** lines (Horizontal Layers standard template; no behavioral change for users who pick Horizontal Layers).

Example MVP-mode emit for Phase 1:
### Phase 1: [Name]
**Goal:** [Goal]
**Mode:** mvp
**Success Criteria**:
1. [Criterion]
Pass PROJECT_MODE to the roadmapper so it applies the correct template.
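Downstream tooling can recover the per-phase mode from the emitted template. A hedged sketch of that parse (the sample heading text is illustrative):

```shell
# Sample of the MVP-mode emit described above.
phase='### Phase 1: Auth
**Goal:** Users can sign up and log in
**Mode:** mvp
**Success Criteria**:
1. User can create an account'

# Read the mode line if present; default to standard when absent.
mode=$(echo "$phase" | sed -n 's/^\*\*Mode:\*\* //p')
mode=${mode:-standard}
echo "$mode"
```

Because absent **Mode:** lines resolve to standard, the Horizontal Layers template needs no changes for this parse to work.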
Spawn gsd-roadmapper agent with path references:
Agent(prompt="
<planning_context>
<files_to_read>
- .planning/PROJECT.md (Project context)
- .planning/REQUIREMENTS.md (v1 Requirements)
- .planning/research/SUMMARY.md (Research findings - if exists)
- .planning/config.json (Granularity and mode settings)
</files_to_read>
${AGENT_SKILLS_ROADMAPPER}
</planning_context>
<instructions>
Create roadmap:
1. Derive phases from requirements (don't impose structure)
2. Map every v1 requirement to exactly one phase
3. Derive 2-5 success criteria per phase (observable user behaviors)
4. Validate 100% coverage
5. Write files immediately (ROADMAP.md, STATE.md, update REQUIREMENTS.md traceability)
6. Return ROADMAP CREATED with summary
Write files first, then return. This ensures artifacts persist even if context is lost.
</instructions>
", subagent_type="gsd-roadmapper", model="{roadmapper_model}", description="Create roadmap")
ORCHESTRATOR RULE — CODEX RUNTIME: After calling Agent() above, stop working on this task immediately. Do not read more files, edit code, or run tests related to this task while the subagent is active. Wait for the subagent to return its result. This prevents duplicate work, conflicting edits, and wasted context. Only resume when the subagent result is available.
Handle roadmapper return:
If ## ROADMAP BLOCKED:
If ## ROADMAP CREATED:
Read the created ROADMAP.md and present it nicely inline:
---
## Proposed Roadmap
**[N] phases** | **[X] requirements mapped** | All v1 requirements covered ✓
| # | Phase | Goal | Requirements | Success Criteria |
|---|-------|------|--------------|------------------|
| 1 | [Name] | [Goal] | [REQ-IDs] | [count] |
| 2 | [Name] | [Goal] | [REQ-IDs] | [count] |
| 3 | [Name] | [Goal] | [REQ-IDs] | [count] |
...
### Phase Details
**Phase 1: [Name]**
Goal: [goal]
Requirements: [REQ-IDs]
Success criteria:
1. [criterion]
2. [criterion]
3. [criterion]
**Phase 2: [Name]**
Goal: [goal]
Requirements: [REQ-IDs]
Success criteria:
1. [criterion]
2. [criterion]
[... continue for all phases ...]
---
If auto mode: Skip approval gate — auto-approve and commit directly.
CRITICAL: Ask for approval before committing (interactive mode only):
Use AskUserQuestion:
If "Approve": Continue to commit.
If "Adjust phases":
Get user's adjustment notes
Re-spawn roadmapper with revision context:
Agent(prompt="
<revision>
User feedback on roadmap:
[user's notes]
<files_to_read>
- .planning/ROADMAP.md (Current roadmap to revise)
</files_to_read>
${AGENT_SKILLS_ROADMAPPER}
Update the roadmap based on feedback. Edit files in place.
Return ROADMAP REVISED with changes made.
</revision>
", subagent_type="gsd-roadmapper", model="{roadmapper_model}", description="Revise roadmap")
ORCHESTRATOR RULE — CODEX RUNTIME: After calling Agent() above, stop working on this task immediately. Do not read more files, edit code, or run tests related to this task while the subagent is active. Wait for the subagent to return its result. This prevents duplicate work, conflicting edits, and wasted context. Only resume when the subagent result is available.
Present revised roadmap
Loop until user approves
If "Review full file": Display the raw file contents (cat .planning/ROADMAP.md), then re-ask.
Generate or refresh project instruction file before final commit:
gsd-sdk query generate-claude-md --output "$INSTRUCTION_FILE"
This ensures new projects get the default GSD workflow-enforcement guidance and current project context in $INSTRUCTION_FILE.
Commit roadmap (after approval or auto mode):
gsd-sdk query commit "docs: create roadmap ([N] phases)" --files .planning/ROADMAP.md .planning/STATE.md .planning/REQUIREMENTS.md "$INSTRUCTION_FILE"
Present completion summary:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
GSD ► PROJECT INITIALIZED ✓
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
**[Project Name]**
| Artifact | Location |
|----------------|-----------------------------|
| Project | `.planning/PROJECT.md` |
| Config | `.planning/config.json` |
| Research | `.planning/research/` |
| Requirements | `.planning/REQUIREMENTS.md` |
| Roadmap | `.planning/ROADMAP.md` |
| Project guide | `$INSTRUCTION_FILE` |
**[N] phases** | **[X] requirements** | Ready to build ✓
If auto mode:
╔══════════════════════════════════════════╗
║ AUTO-ADVANCING → DISCUSS PHASE 1 ║
╚══════════════════════════════════════════╝
Exit skill and invoke SlashCommand("/gsd-discuss-phase 1 --auto")
If interactive mode:
Check if Phase 1 has UI indicators (look for **UI hint**: yes in Phase 1 detail section of ROADMAP.md):
PHASE1_SECTION=$(gsd-sdk query roadmap.get-phase 1 2>/dev/null)
PHASE1_HAS_UI=$(echo "$PHASE1_SECTION" | grep -qi "UI hint.*yes" && echo "true" || echo "false")
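For reference, the detection above behaves like this on a sample phase section (the sample text is illustrative; the real input comes from gsd-sdk query roadmap.get-phase 1):

```shell
# Sample phase section with a UI hint, standing in for the query output.
PHASE1_SECTION='### Phase 1: Dashboard
**Goal:** Ship the main dashboard
**UI hint**: yes'

# Case-insensitive test: any line matching "UI hint" followed by "yes".
PHASE1_HAS_UI=$(echo "$PHASE1_SECTION" | grep -qi "UI hint.*yes" && echo "true" || echo "false")
echo "$PHASE1_HAS_UI"
```

A section with no UI hint line (or with "UI hint: no") yields false, routing to the no-UI variant of the Next Up banner.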
If Phase 1 has UI (PHASE1_HAS_UI is true):
───────────────────────────────────────────────────────────────
## ▶ Next Up — [${PROJECT_CODE}] ${PROJECT_TITLE}
**Phase 1: [Phase Name]** — [Goal from ROADMAP.md]
/clear then:
/gsd-discuss-phase 1 — gather context and clarify approach
---
**Also available:**
- /gsd-ui-phase 1 — generate UI design contract (recommended for frontend phases)
- /gsd-plan-phase 1 — skip discussion, plan directly
───────────────────────────────────────────────────────────────
If Phase 1 has no UI:
───────────────────────────────────────────────────────────────
## ▶ Next Up — [${PROJECT_CODE}] ${PROJECT_TITLE}
**Phase 1: [Phase Name]** — [Goal from ROADMAP.md]
/clear then:
/gsd-discuss-phase 1 — gather context and clarify approach
---
**Also available:**
- /gsd-plan-phase 1 — skip discussion, plan directly
───────────────────────────────────────────────────────────────
- .planning/PROJECT.md
- .planning/config.json
- .planning/research/ (if research selected)
  - STACK.md
  - FEATURES.md
  - ARCHITECTURE.md
  - PITFALLS.md
  - SUMMARY.md
- .planning/REQUIREMENTS.md
- .planning/ROADMAP.md
- .planning/STATE.md
- $INSTRUCTION_FILE (AGENTS.md for Codex, CLAUDE.md for all other runtimes)

<success_criteria>
- $INSTRUCTION_FILE generated with GSD workflow guidance (AGENTS.md for Codex, CLAUDE.md otherwise)
- /gsd-discuss-phase 1 presented as the next step
- Atomic commits: each phase commits its artifacts immediately. If context is lost, artifacts persist.
</success_criteria>