# Claude Flow Benchmark Examples

This directory contains organized examples for using the Claude Flow benchmark suite with different strategies, coordination modes, and real-world scenarios.
```
examples/
├── basic/      # Simple examples for getting started
├── advanced/   # Complex examples with advanced features
├── real/       # Real claude-flow execution examples
├── cli/        # Command-line interface examples
└── output/     # Generated results and metrics
```
## basic/

Getting started with simple benchmarks:

- `simple_swarm.py` - Basic swarm coordination benchmark
- `simple_hive_mind.py` - Basic hive-mind collective intelligence
- `simple_sparc.py` - Basic SPARC methodology (TDD approach)
- `claude_optimizer_example.py` - Claude optimizer usage
- `example_usage.py` - General usage patterns

Run a basic example:

```bash
cd basic/
python3 simple_swarm.py
```
## advanced/

Complex benchmarks with advanced features:

- `parallel_benchmarks.py` - Concurrent execution strategies
- `optimization_suite.py` - Performance tuning and efficiency analysis
- `comparative_analysis.py` - Multi-strategy comparison
- `demo_comprehensive.py` - Comprehensive feature demonstration
- `parallel_benchmark_demo.py` - Parallel execution patterns

Run an advanced example:

```bash
cd advanced/
python3 parallel_benchmarks.py
```
## real/

Production-ready benchmarks with actual claude-flow execution:

- `real_swarm_benchmark.py` - Real swarm execution with comprehensive metrics
- `real_token_tracking.py` - Token consumption analysis and cost optimization
- `real_performance.py` - System performance monitoring and analysis
- `real_hive_mind_benchmark.py` - Real hive-mind collective intelligence
- `real_sparc_benchmark.py` - Real SPARC methodology execution
- `real_benchmark_examples.py` - Various real benchmark scenarios

Run a real example:

```bash
cd real/
python3 real_swarm_benchmark.py
```
## cli/

Command-line interface demonstrations:

- `cli_examples.sh` - Comprehensive CLI usage examples
- `batch_benchmarks.sh` - Batch execution scripts

Run CLI examples:

```bash
cd cli/
./cli_examples.sh
```

Run batch benchmarks:

```bash
cd cli/
./batch_benchmarks.sh
```
## Quick Start

```bash
python3 basic/simple_swarm.py
python3 real/real_performance.py
python3 real/real_token_tracking.py
./cli/cli_examples.sh
```
## Recommended Examples

- `basic/simple_*.py` - Start here for learning
- `cli/cli_examples.sh` - Command-line reference
- `advanced/optimization_suite.py` - Performance tuning
- `real/real_performance.py` - System monitoring
- `real/real_token_tracking.py` - Cost optimization
- `real/real_swarm_benchmark.py` - Production readiness
- `advanced/comparative_analysis.py` - Strategy comparison
- `cli/batch_benchmarks.sh` - Automated testing
- `advanced/parallel_benchmarks.py` - Concurrent execution
- `real/real_hive_mind_benchmark.py` - Collective intelligence
- `advanced/comparative_analysis.py` - Multi-methodology comparison

## Output

All examples save results to the `output/` directory with timestamps:
```
output/
├── simple_swarm_metrics.json
├── parallel_benchmark_results.json
├── token_tracking_metrics_*.json
├── performance_analysis_*.json
└── batch_results_*/
```
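Because the result files are timestamped, it is often handy to pick up the newest one programmatically. Below is a minimal sketch, assuming the `output/` layout above; the JSON keys inside each file depend on the example that wrote it, so inspect a real file before relying on specific fields:

```python
import glob
import json
import os

def load_latest_metrics(pattern="output/token_tracking_metrics_*.json"):
    """Return the parsed JSON from the newest matching metrics file,
    or None when no file matches yet."""
    candidates = glob.glob(pattern)
    if not candidates:
        return None
    latest = max(candidates, key=os.path.getmtime)
    with open(latest) as f:
        return json.load(f)

# Prints None until at least one example has written a metrics file.
print(load_latest_metrics())
```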
## Prerequisites

Python dependencies:

```bash
pip install psutil  # For system monitoring
```
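To illustrate the kind of data `psutil` exposes for system monitoring, here is a hedged sketch; the performance examples in `real/` may record different or richer fields:

```python
import psutil

def system_snapshot():
    """One-shot CPU/memory snapshot. A sketch only -- the real
    performance examples may collect different metrics."""
    return {
        "cpu_percent": psutil.cpu_percent(interval=0.1),
        "memory_percent": psutil.virtual_memory().percent,
        "process_count": len(psutil.pids()),
    }

print(system_snapshot())
```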
Claude Flow:

```bash
npm install -g claude-flow@alpha
```

Benchmark suite:

```bash
pip install -e .  # From benchmark root directory
```
## Configuration

Most examples can be configured by modifying parameters at the top of each script:

```python
# Example configuration
config = {
    "agents": 5,
    "coordination": "hierarchical",
    "strategy": "development",
    "timeout": 180,
}
```
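One convenient way to apply per-run overrides to such a config dict is sketched below; `build_config()` is a hypothetical helper for illustration, not part of the example scripts:

```python
# Defaults mirror the example configuration above.
DEFAULTS = {
    "agents": 5,
    "coordination": "hierarchical",
    "strategy": "development",
    "timeout": 180,  # seconds
}

def build_config(**overrides):
    """Return a copy of DEFAULTS with keyword overrides applied.
    Rejects unknown keys so typos fail fast. Hypothetical helper."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"unknown config keys: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}

config = build_config(agents=8, timeout=300)
print(config)
```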
## Suggested Workflow

1. Start with the `basic/` examples
2. Run `real/real_performance.py` for system analysis
3. Run `real/real_token_tracking.py` for cost optimization
4. Compare strategies with `advanced/comparative_analysis.py`
5. Automate runs with `cli/batch_benchmarks.sh`

Use batch scripts for automated testing:
```yaml
# GitHub Actions example
- name: Run Benchmark Suite
  run: |
    cd benchmark/examples/cli/
    ./batch_benchmarks.sh
- name: Upload Results
  uses: actions/upload-artifact@v3
  with:
    name: benchmark-results
    path: benchmark/examples/output/
```
## Troubleshooting

Common issues:

- Ensure `claude-flow@alpha` is installed globally
- Make the shell scripts executable: `chmod +x cli/*.sh`
- Install the suite with `pip install -e .` from the benchmark root

Debug mode: add the `--debug` flag to commands for verbose output:

```bash
python3 real/real_swarm_benchmark.py --debug
```
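A script can expose such a flag with the standard library's `argparse`; this is a generic sketch of the pattern, not the exact argument handling used by the real scripts:

```python
import argparse

def parse_args(argv=None):
    """Parse command-line options; --debug enables verbose output.
    Generic sketch -- the benchmark scripts may differ."""
    parser = argparse.ArgumentParser(description="Run a benchmark example")
    parser.add_argument("--debug", action="store_true",
                        help="enable verbose output")
    return parser.parse_args(argv)

args = parse_args(["--debug"])
print(args.debug)  # True when --debug is passed
```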
## Contributing

To add new examples:

- Place the example in the appropriate directory (`basic/`, `advanced/`, `real/`, `cli/`)
- Follow the naming pattern `{purpose}_{type}_{description}.py`
- Save results to the `output/` directory with timestamps
- Document the example in `/workspaces/claude-code-flow/benchmark/docs/`

## Next Steps