plugins/ruflo-intelligence/skills/intelligence-route/SKILL.md
Pick the optimal agent + model tier for a task using learned patterns + the 3-tier router. Emits a hooks_explain rationale so the choice is auditable.
Use before starting any non-trivial task; it replaces manual agent selection with data-driven decisions.
1. Call `mcp__claude-flow__hooks_route` with the task description. Returns `{ recommended, confidence, reasoning }`.
2. Call `mcp__claude-flow__hooks_model-route` for Haiku/Sonnet/Opus selection.
3. Call `mcp__claude-flow__hooks_intelligence_pattern-search` to find prior successes.
4. Call `mcp__claude-flow__neural_predict` with the task description for a confidence-scored prediction.
5. If `--why` was passed, call `mcp__claude-flow__hooks_explain` to surface the routing rationale to the user.
6. After the task, report `mcp__claude-flow__hooks_model-outcome` with `success: true|false` to train the router.

| Tier | Handler | Latency | Cost | When |
|---|---|---|---|---|
| 1 | Agent Booster (WASM) | <1ms | $0 | Simple transforms (var→const, add types, remove console) — skip LLM entirely |
| 2 | Haiku | ~500ms | ~$0.0002 | Low complexity (<30%), bug fixes, quick patches |
| 3 | Sonnet/Opus | 2–5s | $0.003–$0.015 | Complex reasoning, architecture, security, multi-file refactors |
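The tier decision above can be sketched as a simple dispatch: known mechanical transforms short-circuit to Tier 1, low-complexity work goes to Haiku, and everything else escalates. This is an illustrative sketch only; the threshold (0.3, i.e. the "<30%" from the table) and the pattern list are assumptions, not claude-flow's actual routing internals.

```typescript
// Hypothetical sketch of the 3-tier routing decision.
type Tier = 1 | 2 | 3;

// Intents Tier 1 (Agent Booster) can handle without an LLM — illustrative subset.
const BOOSTER_PATTERNS = ["var-to-const", "add-types", "remove-console"];

function routeTier(intent: string, complexity: number): Tier {
  // Tier 1: known mechanical transform — skip the LLM entirely.
  if (BOOSTER_PATTERNS.includes(intent)) return 1;
  // Tier 2: low-complexity tasks (<30%) go to Haiku.
  if (complexity < 0.3) return 2;
  // Tier 3: complex reasoning needs Sonnet/Opus.
  return 3;
}

console.log(routeTier("var-to-const", 0.1)); // 1
console.log(routeTier("bug-fix", 0.2));      // 2
console.log(routeTier("refactor", 0.8));     // 3
```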
When hooks_route returns [AGENT_BOOSTER_AVAILABLE] for an intent type (var-to-const, add-types, add-error-handling, async-await, add-logging, remove-console), skip the LLM and use the Edit tool directly.
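To make the Tier 1 path concrete, here is the flavor of transform Agent Booster handles. This is a naive regex sketch for illustration only — the real booster runs compiled WASM and would use an AST rather than string matching.

```typescript
// Illustrative var→const transform (the kind of edit Tier 1 applies directly).
function varToConst(source: string): string {
  // Rewrite `var x = ...` declarations to `const x = ...`.
  // A real transform would parse the code and verify x is never reassigned.
  return source.replace(/\bvar\b(?=\s+\w+\s*=)/g, "const");
}

console.log(varToConst("var x = 1;")); // "const x = 1;"
```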
Closing the routing loop is mandatory:
```bash
# Success
mcp tool call hooks_model-outcome --json -- '{"taskId": "T123", "success": true, "model": "haiku"}'

# Failure with reason
mcp tool call hooks_model-outcome --json -- '{"taskId": "T123", "success": false, "model": "haiku", "reason": "complexity-misjudged"}'
```
The router learns only from these calls; skipping them means no learning signal is recorded.
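One way to picture what those outcome reports feed: a per-model success rate that can nudge future tier choices. The shape below is an assumption for illustration — field names and the aggregation are not claude-flow internals.

```typescript
// Hypothetical aggregation of hooks_model-outcome reports.
interface Outcome {
  model: string;
  success: boolean;
}

// Fraction of successful runs for a given model (0 if no data yet).
function successRate(outcomes: Outcome[], model: string): number {
  const runs = outcomes.filter((o) => o.model === model);
  if (runs.length === 0) return 0;
  return runs.filter((o) => o.success).length / runs.length;
}

const history: Outcome[] = [
  { model: "haiku", success: true },
  { model: "haiku", success: false },
  { model: "sonnet", success: true },
];
console.log(successRate(history, "haiku"));  // 0.5
console.log(successRate(history, "sonnet")); // 1
```

A router could compare these rates against the cost table to decide when Haiku's savings stop being worth its failure rate.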
```bash
npx @claude-flow/cli@latest hooks route --task "description"
npx @claude-flow/cli@latest hooks pre-task --description "description"
npx @claude-flow/cli@latest hooks explain --topic "routing decision"
```