src/bmm-skills/1-analysis/bmad-document-project/workflows/full-scan-instructions.md
<critical>This workflow performs complete project documentation (Steps 1-12)</critical>
<critical>Handles: initial_scan and full_rescan modes</critical>
<critical>YOU MUST ALWAYS SPEAK output in your agent communication style, using the configured {communication_language}</critical>
<critical>YOU MUST ALWAYS WRITE all artifact and document content in {document_output_language}</critical>
<action>Display explanation to user:
How Project Type Detection Works:
This workflow uses a single comprehensive CSV file to intelligently document your project:
documentation-requirements.csv (../documentation-requirements.csv)
When Documentation Requirements are Loaded:
<action>Now loading documentation requirements data for fresh start...</action>
<action>Load documentation-requirements.csv from: ../documentation-requirements.csv</action> <action>Store all 12 rows indexed by project_type_id for project detection and requirements lookup</action> <action>Display: "Loaded documentation requirements for 12 project types (web, mobile, backend, cli, library, desktop, game, data, extension, infra, embedded)"</action>
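As a reference sketch (not a required implementation), the CSV load above could look like the following in Python. The column name `project_type_id` comes from this workflow; the relative path and any other column names are assumptions about the runtime layout:

```python
import csv

def load_documentation_requirements(csv_path):
    """Index documentation-requirements.csv rows by project_type_id."""
    requirements = {}
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            requirements[row["project_type_id"]] = row
    return requirements

# Hypothetical usage; the real file lives at ../documentation-requirements.csv
# relative to this instructions file:
#   reqs = load_documentation_requirements("../documentation-requirements.csv")
#   print(f"Loaded documentation requirements for {len(reqs)} project types")
```

Indexing by `project_type_id` makes the later per-part requirements lookup a constant-time dictionary access.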
<action>Display: "✓ Documentation requirements loaded successfully. Ready to begin project analysis."</action> </step>
<step n="0.6" goal="Check for existing documentation and determine workflow mode"> <action>Check if {project_knowledge}/index.md exists</action> <check if="index.md exists"> <action>Read existing index.md to extract metadata (date, project structure, parts count)</action> <action>Store as {{existing_doc_date}}, {{existing_structure}}</action><ask>I found existing documentation generated on {{existing_doc_date}}.
What would you like to do?

1. Full rescan (regenerate all documentation from scratch)
2. Deep dive (exhaustive re-analysis of selected areas; jumps to Step 13)
3. Keep existing documentation and exit

Your choice [1/2/3]: </ask>
<check if="user selects 1"> <action>Set workflow_mode = "full_rescan"</action> <action>Continue to scan level selection below</action> </check> <check if="user selects 2"> <action>Set workflow_mode = "deep_dive"</action> <action>Set scan_level = "exhaustive"</action> <action>Initialize state file with mode=deep_dive, scan_level=exhaustive</action> <action>Jump to Step 13</action> </check> <check if="user selects 3"> <action>Display message: "Keeping existing documentation. Exiting workflow."</action> <action>Exit workflow</action> </check> </check> <check if="index.md does not exist"> <action>Set workflow_mode = "initial_scan"</action> <action>Continue to scan level selection below</action> </check><action if="workflow_mode != deep_dive">Select Scan Level</action>
<check if="workflow_mode == initial_scan OR workflow_mode == full_rescan"> <ask>Choose your scan depth level:1. Quick Scan (2-5 minutes) [DEFAULT]
2. Deep Scan (10-30 minutes)
3. Exhaustive Scan (30-120 minutes)
Your choice [1/2/3] (default: 1): </ask>
<action if="user selects 1 OR user presses enter"> <action>Set scan_level = "quick"</action> <action>Display: "Using Quick Scan (pattern-based, no source file reading)"</action> </action> <action if="user selects 2"> <action>Set scan_level = "deep"</action> <action>Display: "Using Deep Scan (reading critical files per project type)"</action> </action> <action if="user selects 3"> <action>Set scan_level = "exhaustive"</action> <action>Display: "Using Exhaustive Scan (reading all source files)"</action> </action><action>Initialize state file: {project_knowledge}/project-scan-report.json</action> <critical>Every time you touch the state file, record: step id, human-readable summary (what you actually did), precise timestamp, and any outputs written. Vague phrases are unacceptable.</critical> <action>Write initial state: { "workflow_version": "1.2.0", "timestamps": {"started": "{{current_timestamp}}", "last_updated": "{{current_timestamp}}"}, "mode": "{{workflow_mode}}", "scan_level": "{{scan_level}}", "project_root": "{{project_root_path}}", "project_knowledge": "{{project_knowledge}}", "completed_steps": [], "current_step": "step_1", "findings": {}, "outputs_generated": ["project-scan-report.json"], "resume_instructions": "Starting from step 1" } </action> <action>Continue with standard workflow from Step 1</action> </check> </step>
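A minimal sketch of the state-file initialization described above, assuming ISO 8601 timestamps; the field names follow the JSON structure in this step, and the helper name is illustrative:

```python
import json
from datetime import datetime, timezone

def init_state_file(state_path, mode, scan_level, project_root, project_knowledge):
    """Write the initial project-scan-report.json state described above."""
    now = datetime.now(timezone.utc).isoformat()
    state = {
        "workflow_version": "1.2.0",
        "timestamps": {"started": now, "last_updated": now},
        "mode": mode,
        "scan_level": scan_level,
        "project_root": project_root,
        "project_knowledge": project_knowledge,
        "completed_steps": [],
        "current_step": "step_1",
        "findings": {},
        "outputs_generated": ["project-scan-report.json"],
        "resume_instructions": "Starting from step 1",
    }
    with open(state_path, "w", encoding="utf-8") as f:
        json.dump(state, f, indent=2)
    return state
```

Every later step appends to `completed_steps` and refreshes `timestamps.last_updated`, which is what makes resume-after-interruption possible.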
<step n="1" goal="Detect project structure and classify project type" if="workflow_mode != deep_dive"> <action>Ask user: "What is the root directory of the project to document?" (default: current working directory)</action> <action>Store as {{project_root_path}}</action><action>Scan {{project_root_path}} for key indicators:
<action>Detect if project is:
Is this correct? Should I document each part separately? [y/n] </ask>
<action if="user confirms">Set repository_type = "monorepo" or "multi-part"</action> <action if="user confirms">For each detected part: - Identify root path - Run project type detection using key_file_patterns from documentation-requirements.csv - Store as part in project_parts array </action>
<action if="user denies or corrects">Ask user to specify correct parts and their paths</action> </check>
<check if="single cohesive project detected"> <action>Set repository_type = "monolith"</action> <action>Create single part in project_parts array with root_path = {{project_root_path}}</action> <action>Run project type detection using key_file_patterns from documentation-requirements.csv</action> </check><action>For each part, match detected technologies and file patterns against key_file_patterns column in documentation-requirements.csv</action> <action>Assign project_type_id to each part</action> <action>Load corresponding documentation_requirements row for each part</action>
<ask>I've classified this project: {{project_classification_summary}}
Does this look correct? [y/n/edit] </ask>
<template-output>project_structure</template-output> <template-output>project_parts_metadata</template-output>
<action>IMMEDIATELY update state file with step completion:
<action>PURGE detailed scan results from memory, keep only summary: "{{repository_type}}, {{parts_count}} parts, {{primary_tech}}"</action> </step>
<step n="2" goal="Discover existing documentation and gather user context" if="workflow_mode != deep_dive"> <action>For each part, scan for existing documentation using patterns: - README.md, README.rst, README.txt - CONTRIBUTING.md, CONTRIBUTING.rst - ARCHITECTURE.md, ARCHITECTURE.txt, docs/architecture/ - DEPLOYMENT.md, DEPLOY.md, docs/deployment/ - API.md, docs/api/ - Any files in docs/, documentation/, .github/ folders </action><action>Create inventory of existing_docs with:
<ask>I found these existing documentation files: {{existing_docs_list}}
Are there any other important documents or key areas I should focus on while analyzing this project? [Provide paths or guidance, or type 'none'] </ask>
<action>Store user guidance as {{user_context}}</action>
<template-output>existing_documentation_inventory</template-output> <template-output>user_provided_context</template-output>
<action>Update state file:
<action>PURGE detailed doc contents from memory, keep only: "{{existing_docs_count}} docs found"</action> </step>
<step n="3" goal="Analyze technology stack for each part" if="workflow_mode != deep_dive"> <action>For each part in project_parts: - Load key_file_patterns from documentation_requirements - Scan part root for these patterns - Parse technology manifest files (package.json, go.mod, requirements.txt, etc.) - Extract: framework, language, version, database, dependencies - Build technology_table with columns: Category, Technology, Version, Justification </action><action>Determine architecture pattern based on detected tech stack:
<template-output>technology_stack</template-output> <template-output>architecture_patterns</template-output>
<action>Update state file:
<action>PURGE detailed tech analysis from memory, keep only: "{{framework}} on {{language}}"</action> </step>
<step n="4" goal="Perform conditional analysis based on project type requirements" if="workflow_mode != deep_dive"><critical>BATCHING STRATEGY FOR DEEP/EXHAUSTIVE SCANS</critical>
<check if="scan_level == deep OR scan_level == exhaustive"> <action>This step requires file reading. Apply batching strategy:</action><action>Identify subfolders to process based on: - scan_level == "deep": Use critical_directories from documentation_requirements - scan_level == "exhaustive": Get ALL subfolders recursively (excluding node_modules, .git, dist, build, coverage) </action>
<action>For each subfolder to scan: 1. Read all files in subfolder (consider file size - use judgment for files >5000 LOC) 2. Extract required information based on conditional flags below 3. IMMEDIATELY write findings to appropriate output file 4. Validate written document (section-level validation) 5. Update state file with batch completion 6. PURGE detailed findings from context, keep only 1-2 sentence summary 7. Move to next subfolder </action>
<action>Track batches in state file: findings.batches_completed: [ {"path": "{{subfolder_path}}", "files_scanned": {{count}}, "summary": "{{brief_summary}}"} ] </action> </check>
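The exhaustive-scan subfolder enumeration (with the exclusions named above) can be sketched as follows; this is illustrative, not prescribed tooling:

```python
import os

# Exclusions named by the workflow for exhaustive scans.
EXCLUDED_DIRS = {"node_modules", ".git", "dist", "build", "coverage"}

def list_subfolders(root):
    """Recursively list subfolders, skipping dependency/build directories."""
    subfolders = []
    for dirpath, dirnames, _ in os.walk(root):
        # Prune excluded directories in place so os.walk never descends into them.
        dirnames[:] = [d for d in dirnames if d not in EXCLUDED_DIRS]
        subfolders.extend(os.path.join(dirpath, d) for d in dirnames)
    return subfolders
```

Pruning `dirnames` in place matters: it prevents `os.walk` from ever entering `node_modules` or `.git`, rather than filtering results after an expensive traversal.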
<check if="scan_level == quick"> <action>Use pattern matching only - do NOT read source files</action> <action>Use glob/grep to identify file locations and patterns</action> <action>Extract information from filenames, directory structure, and config files only</action> </check><action>For each part, check documentation_requirements boolean flags and execute corresponding scans:</action>
<check if="requires_api_scan == true"> <action>Scan for API routes and endpoints using integration_scan_patterns</action> <action>Look for: controllers/, routes/, api/, handlers/, endpoints/</action> <check if="scan_level == quick"> <action>Use glob to find route files, extract patterns from filenames and folder structure</action> </check> <check if="scan_level == deep OR scan_level == exhaustive"> <action>Read files in batches (one subfolder at a time)</action> <action>Extract: HTTP methods, paths, request/response types from actual code</action> </check><action>Build API contracts catalog</action> <action>IMMEDIATELY write to: {project_knowledge}/api-contracts-{part_id}.md</action> <action>Validate document has all required sections</action> <action>Update state file with output generated</action> <action>PURGE detailed API data, keep only: "{{api_count}} endpoints documented"</action> <template-output>api_contracts_{part_id}</template-output> </check>
<check if="requires_data_models == true"> <action>Scan for data models using schema_migration_patterns</action> <action>Look for: models/, schemas/, entities/, migrations/, prisma/, ORM configs</action> <check if="scan_level == quick"> <action>Identify schema files via glob, parse migration file names for table discovery</action> </check> <check if="scan_level == deep OR scan_level == exhaustive"> <action>Read model files in batches (one subfolder at a time)</action> <action>Extract: table names, fields, relationships, constraints from actual code</action> </check><action>Build database schema documentation</action> <action>IMMEDIATELY write to: {project_knowledge}/data-models-{part_id}.md</action> <action>Validate document completeness</action> <action>Update state file with output generated</action> <action>PURGE detailed schema data, keep only: "{{table_count}} tables documented"</action> <template-output>data_models_{part_id}</template-output> </check>
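For the quick-scan path, table discovery from migration file names alone could be sketched as below. The naming convention matched here (e.g. `20240101120000_create_users_table.sql`) is a hypothetical common case, not something this workflow guarantees:

```python
import re

# Hypothetical migration naming convention: <timestamp-or-seq>_create_<table>_table.<ext>
# or <timestamp-or-seq>_add_<table>.<ext>
MIGRATION_RE = re.compile(
    r"^\d+_(?:create|add)_([a-z0-9_]+?)(?:_table)?\.(?:sql|py|js|ts)$"
)

def tables_from_migration_names(filenames):
    """Guess table names from migration filenames without reading file contents."""
    tables = []
    for name in filenames:
        m = MIGRATION_RE.match(name)
        if m:
            tables.append(m.group(1))
    return tables
```

This is intentionally lossy: the quick scan trades accuracy for speed, and deep/exhaustive scans replace these guesses by reading the actual model files.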
<check if="requires_state_management == true"> <action>Analyze state management patterns</action> <action>Look for: Redux, Context API, MobX, Vuex, Pinia, Provider patterns</action> <action>Identify: stores, reducers, actions, state structure</action> <template-output>state_management_patterns_{part_id}</template-output> </check> <check if="requires_ui_components == true"> <action>Inventory UI component library</action> <action>Scan: components/, ui/, widgets/, views/ folders</action> <action>Categorize: Layout, Form, Display, Navigation, etc.</action> <action>Identify: Design system, component patterns, reusable elements</action> <template-output>ui_component_inventory_{part_id}</template-output> </check> <check if="requires_hardware_docs == true"> <action>Look for hardware schematics using hardware_interface_patterns</action> <ask>This appears to be an embedded/hardware project. Do you have: - Pinout diagrams - Hardware schematics - PCB layouts - Hardware documentation

If yes, please provide paths or links. [Provide paths or type 'none'] </ask> <action>Store hardware docs references</action> <template-output>hardware_documentation_{part_id}</template-output> </check>
<check if="requires_asset_inventory == true"> <action>Scan and catalog assets using asset_patterns</action> <action>Categorize by: Images, Audio, 3D Models, Sprites, Textures, etc.</action> <action>Calculate: Total size, file counts, formats used</action> <template-output>asset_inventory_{part_id}</template-output> </check><action>Scan for additional patterns based on doc requirements:
<action>Apply scan_level strategy to each pattern scan (quick=glob only, deep/exhaustive=read files)</action>
<template-output>comprehensive_analysis_{part_id}</template-output>
<action>Update state file:
<action>PURGE all detailed scan results from context. Keep only summaries:
<action>Annotate the tree with:
<action if="multi-part project">Show how parts are organized and where they interface</action>
<action>Create formatted source tree with descriptions:
project-root/
├── client/ # React frontend (Part: client)
│ ├── src/
│ │ ├── components/ # Reusable UI components
│ │ ├── pages/ # Route-based pages
│ │ └── api/ # API client layer → Calls server/
├── server/ # Express API backend (Part: api)
│ ├── src/
│ │ ├── routes/ # REST API endpoints
│ │ ├── models/ # Database models
│ │ └── services/ # Business logic
<template-output>source_tree_analysis</template-output> <template-output>critical_folders_summary</template-output>
<action>IMMEDIATELY write source-tree-analysis.md to disk</action> <action>Validate document structure</action> <action>Update state file:
<action>Look for deployment configuration using ci_cd_patterns:
<template-output>development_instructions</template-output> <template-output>deployment_configuration</template-output> <template-output>contribution_guidelines</template-output>
<action>Update state file:
<action>Create integration_points array with:
<action>IMMEDIATELY write integration-architecture.md to disk</action> <action>Validate document completeness</action>
<template-output>integration_architecture</template-output>
<action>Update state file:
<action>For each architecture file generated:
<template-output>architecture_document</template-output>
<action>Update state file:
<action>Generate project-overview.md with:
<action>IMMEDIATELY write project-overview.md to disk</action> <action>Validate document sections</action>
<action>Generate source-tree-analysis.md (if not already written in Step 5)</action> <action>IMMEDIATELY write to disk and validate</action>
<action>Generate component-inventory.md (or per-part versions) with:
<action>Generate development-guide.md (or per-part versions) with:
<action>Generate project-parts.json metadata file:
```json
{
  "repository_type": "monorepo",
  "parts": [ ... ],
  "integration_points": [ ... ]
}
```
</action>
<action>IMMEDIATELY write to disk</action>
</action>
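A sketch of writing the project-parts.json metadata file; the per-part fields shown in the comment are illustrative assumptions, since the exact part schema is defined by earlier steps:

```python
import json

def write_project_parts(path, repository_type, parts, integration_points):
    """Write the project-parts.json metadata file described above."""
    metadata = {
        "repository_type": repository_type,
        "parts": parts,
        "integration_points": integration_points,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(metadata, f, indent=2)

# Hypothetical part entry:
# {"part_id": "client", "root_path": "client/", "project_type_id": "web"}
```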
<template-output>supporting_documentation</template-output>
<action>Update state file:
<action>PURGE all document contents from context, keep only list of files generated</action> </step>
<step n="10" goal="Generate master index as primary AI retrieval source" if="workflow_mode != deep_dive"><critical>INCOMPLETE DOCUMENTATION MARKER CONVENTION: When a document SHOULD be generated but wasn't (due to quick scan, missing data, conditional requirements not met), list its entry in index.md with the exact case-sensitive marker "(To be generated)" appended.</critical>
<action>Create index.md with intelligent navigation based on project structure</action>
<action if="single part project"> <action>Generate simple index with: - Project name and type - Quick reference (tech stack, architecture type) - Links to all generated docs - Links to discovered existing docs - Getting started section </action> </action> <action if="multi-part project"> <action>Generate comprehensive index with: - Project overview and structure summary - Part-based navigation section - Quick reference by part - Cross-part integration links - Links to all generated and existing docs - Getting started per part </action> </action><action>Include in index.md:
{{#if single_part}}
{{#each existing_docs}}
{{getting_started_instructions}} </action>
<action>Before writing index.md, check which expected files actually exist:
<action>IMMEDIATELY write index.md to disk with appropriate (To be generated) markers for missing files</action> <action>Validate index has all required sections and links are valid</action>
<template-output>index</template-output>
<action>Update state file:
<action>PURGE index content from context</action> </step>
<step n="11" goal="Validate and review generated documentation" if="workflow_mode != deep_dive"> <action>Show summary of all generated files: Generated in {{project_knowledge}}/: {{file_list_with_sizes}} </action><action>Run validation checklist from ../checklist.md</action>
<critical>INCOMPLETE DOCUMENTATION DETECTION:
<action>Read {project_knowledge}/index.md</action>
<action>Scan for incomplete documentation markers:
Step 1: Search for exact pattern "(To be generated)" (case-sensitive)
Step 2: For each match found, extract the entire line
Step 3: Parse line to extract:
<action>Fallback fuzzy scan for alternate markers:
Search for patterns: (TBD), (TODO), (Coming soon), (Not yet generated), (Pending)
For each fuzzy match:
<action>Combine results: Set {{incomplete_docs_list}} = {{incomplete_docs_strict}} + {{incomplete_docs_fuzzy}}
For each item store structure:
{
  "title": "Architecture – Server",
  "file_path": "./architecture-server.md",
  "doc_type": "architecture",
  "part_id": "server",
  "line_text": "- Architecture – Server (To be generated)",
  "fuzzy_match": false
}
</action>
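The strict-plus-fuzzy marker scan above could be sketched as follows. Extracting `doc_type` and `part_id` from the title is omitted here, and the assumption that incomplete entries are markdown links is illustrative:

```python
import re

STRICT_MARKER = "(To be generated)"
FUZZY_MARKERS = ["(TBD)", "(TODO)", "(Coming soon)", "(Not yet generated)", "(Pending)"]
# Assumes entries are markdown links: [Title](./path.md)
LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")

def scan_incomplete_docs(index_text):
    """Find index.md lines flagged with incomplete-documentation markers."""
    results = []
    for line in index_text.splitlines():
        if STRICT_MARKER in line:
            fuzzy = False
        elif any(m in line for m in FUZZY_MARKERS):
            fuzzy = True
        else:
            continue
        match = LINK_RE.search(line)
        results.append({
            "title": match.group(1) if match else line.strip("-* \t"),
            "file_path": match.group(2) if match else None,
            "line_text": line,
            "fuzzy_match": fuzzy,
        })
    return results
```

Keeping the strict pass separate from the fuzzy pass preserves the `fuzzy_match` flag, which Step 11 uses to warn about non-standard markers.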
<ask>Documentation generation complete!
Summary:
{{#if incomplete_docs_list.length > 0}} ⚠️ Incomplete Documentation Detected:
I found {{incomplete_docs_list.length}} item(s) marked as incomplete:
{{#each incomplete_docs_list}} {{@index + 1}}. {{title}} ({{doc_type}}{{#if part_id}} for {{part_id}}{{/if}}){{#if fuzzy_match}} ⚠️ [non-standard marker]{{/if}} {{/each}}
{{/if}}
Would you like to:
{{#if incomplete_docs_list.length > 0}}
Your choice: </ask>
<check if="user selects option 1 (generate incomplete)"> <ask>Which incomplete items would you like to generate?
{{#each incomplete_docs_list}} {{@index + 1}}. {{title}} ({{doc_type}}{{#if part_id}} - {{part_id}}{{/if}}) {{/each}} {{incomplete_docs_list.length + 1}}. All of them
Enter number(s) separated by commas (e.g., "1,3,5"), or type 'all': </ask>
<action>Parse user selection:
If "all", set {{selected_items}} = all items in {{incomplete_docs_list}}
If comma-separated numbers, extract selected items by index
Store result in {{selected_items}} array </action>
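The selection parsing above can be sketched as a small helper; treating out-of-range numbers as silently ignored is an assumption, since the workflow does not specify error handling:

```python
def parse_selection(user_input, items):
    """Parse 'all' or comma-separated 1-based indices like '1,3,5'."""
    text = user_input.strip().lower()
    if text == "all":
        return list(items)
    selected = []
    for token in text.split(","):
        token = token.strip()
        if token.isdigit():
            idx = int(token) - 1  # user-facing numbers are 1-based
            if 0 <= idx < len(items):
                selected.append(items[idx])
    return selected
```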
<action>Display: "Generating {{selected_items.length}} document(s)..."</action>
<action>For each item in {{selected_items}}:
Identify the part and requirements:
Route to appropriate generation substep based on doc_type:
If doc_type == "architecture":
If doc_type == "api-contracts":
If doc_type == "data-models":
If doc_type == "component-inventory":
If doc_type == "development-guide":
If doc_type == "deployment-guide":
If doc_type == "integration-architecture":
Post-generation actions:
Handle errors:
<action>After all selected items are processed:
Update index.md to remove markers:
<action>Display generation summary:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✓ Documentation Generation Complete!
Successfully Generated: {{#each newly_generated_docs}}
{{#if failed_generations.length > 0}} Failed to Generate: {{#each failed_generations}}
Updated: index.md (removed incomplete markers)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ </action>
<action>Update state file with all generation activities</action>
<action>Return to Step 11 menu (loop back to check for any remaining incomplete items)</action> </check>
<action if="user requests other changes (options 2-3)">Make requested modifications and regenerate affected files</action> <action if="user selects finalize (option 4 or 5)">Proceed to Step 12 completion</action>
<check if="not finalizing"> <action>Update state file: - Add to completed_steps: {"step": "step_11_iteration", "status": "completed", "timestamp": "{{now}}", "summary": "Review iteration complete"} - Keep current_step = "step_11" (for loop back) - Update last_updated timestamp </action> <action>Loop back to beginning of Step 11 (re-scan for remaining incomplete docs)</action> </check> <check if="finalizing"> <action>Update state file: - Add to completed_steps: {"step": "step_11", "status": "completed", "timestamp": "{{now}}", "summary": "Validation and review complete"} - Update current_step = "step_12" </action> <action>Proceed to Step 12</action> </check> </step> <step n="12" goal="Finalize and provide next steps" if="workflow_mode != deep_dive"> <action>Create final summary report</action> <action>Compile verification recap variables: - Set {{verification_summary}} to the concrete tests, validations, or scripts you executed (or "none run"). - Set {{open_risks}} to any remaining risks or TODO follow-ups (or "none"). - Set {{next_checks}} to recommended actions before merging/deploying (or "none"). </action><action>Display completion message:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Location: {{project_knowledge}}/
Master Index: {{project_knowledge}}/index.md 👆 This is your primary entry point for AI-assisted development
Generated Documentation: {{generated_files_list}}
Next Steps:
Verification Recap:
Brownfield PRD Command: When ready to plan new features, run the PRD workflow and provide this index as input.
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ </action>
<action>FINALIZE state file:
<action>Display: "State file saved: {{project_knowledge}}/project-scan-report.json"</action>
<action>Run: python3 {project-root}/_bmad/scripts/resolve_customization.py --skill {skill-root} --key workflow.on_complete — if the resolved value is non-empty, follow it as the final terminal instruction before exiting.</action>