v3/implementation/adrs/ADR-011-llm-provider-system.md
- **Status:** Implemented ✅
- **Date:** 2026-01-05
- **Last Updated:** 2026-01-05
V3 needs a unified LLM provider system: one interface across multiple vendors, with health checks, cost estimation, response caching, and failover. V2 has concrete provider implementations in `v2/src/providers/` that need to be modernized for V3.
### @claude-flow/providers Package

A dedicated package for LLM provider implementations:
```
v3/@claude-flow/providers/
├── src/
│   ├── types.ts               # Unified type definitions
│   ├── base-provider.ts       # Abstract base class with circuit breaker
│   ├── anthropic-provider.ts  # Claude models
│   ├── openai-provider.ts     # GPT models (+ OpenRouter support)
│   ├── google-provider.ts     # Gemini models
│   ├── cohere-provider.ts     # Command models
│   ├── ollama-provider.ts     # Local models
│   ├── ruvector-provider.ts   # RuVector/ruvLLM with SONA learning
│   ├── provider-manager.ts    # Orchestration layer
│   ├── __tests__/             # Integration tests
│   │   └── quick-test.ts      # Provider test suite
│   └── index.ts               # Public exports
├── package.json
└── tsconfig.json
```
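For orientation, the public surface in `index.ts` would plausibly re-export the provider classes and shared types listed above; a sketch, with the exact export names being assumptions inferred from the file names:

```typescript
// index.ts — hypothetical public exports; class names are inferred from the file layout above.
export * from './types';
export { BaseProvider } from './base-provider';
export { AnthropicProvider } from './anthropic-provider';
export { OpenAIProvider } from './openai-provider';
export { GoogleProvider } from './google-provider';
export { CohereProvider } from './cohere-provider';
export { OllamaProvider } from './ollama-provider';
export { RuVectorProvider } from './ruvector-provider';
export { ProviderManager } from './provider-manager';
```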
All providers implement a unified interface:
```typescript
interface ILLMProvider {
  readonly name: LLMProvider;
  readonly capabilities: ProviderCapabilities;

  initialize(): Promise<void>;
  complete(request: LLMRequest): Promise<LLMResponse>;
  streamComplete(request: LLMRequest): AsyncIterable<LLMStreamEvent>;
  healthCheck(): Promise<HealthCheckResult>;
  estimateCost(request: LLMRequest): Promise<CostEstimate>;
  destroy(): void;
}
```
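To make the contract concrete, here is a hedged usage sketch; the constructor options and the `LLMRequest` fields shown (`model`, `messages`) are assumptions, not confirmed by this ADR:

```typescript
// Hypothetical usage of ILLMProvider via a concrete provider.
// Constructor options and request fields are assumptions.
import { AnthropicProvider } from '@claude-flow/providers';

async function main(): Promise<void> {
  const provider = new AnthropicProvider({ apiKey: process.env.ANTHROPIC_API_KEY! });
  await provider.initialize();

  // One-shot completion
  const response = await provider.complete({
    model: 'claude-3-haiku-20240307',
    messages: [{ role: 'user', content: 'Summarize ADR-011 in one sentence.' }],
  });
  console.log(response);

  // Streaming completion (streamComplete returns an AsyncIterable directly)
  for await (const event of provider.streamComplete({
    model: 'claude-3-haiku-20240307',
    messages: [{ role: 'user', content: 'Same question, streamed.' }],
  })) {
    process.stdout.write(JSON.stringify(event) + '\n');
  }

  provider.destroy();
}

main().catch(console.error);
```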
Add LLM-specific hooks to `@claude-flow/hooks` (an illustrative sketch follows the list):
**Pre-LLM hooks:**
- Request caching lookup
- Provider-specific optimizations
- Cost constraint validation

**Post-LLM hooks:**
- Response caching
- Pattern learning
- Cost tracking
- Performance metrics
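The hook registration API in `@claude-flow/hooks` is not shown in this ADR, so the sketch below only illustrates the logic a pre-LLM cache lookup and a post-LLM cache/metrics hook would carry; all names and type shapes are hypothetical:

```typescript
// Hypothetical pre/post LLM hook bodies — not the actual @claude-flow/hooks API.
import { createHash } from 'node:crypto';

type LLMRequest = { model: string; messages: Array<{ role: string; content: string }> };
type LLMResponse = { content: string; usage?: { totalTokens: number } };

const responseCache = new Map<string, LLMResponse>();

const cacheKey = (req: LLMRequest): string =>
  createHash('sha256').update(JSON.stringify(req)).digest('hex');

// Pre-LLM: return a cached response (if any) before the provider is called.
export function preLLMHook(req: LLMRequest): LLMResponse | undefined {
  return responseCache.get(cacheKey(req));
}

// Post-LLM: cache the response and record token usage for cost tracking.
export function postLLMHook(req: LLMRequest, res: LLMResponse): void {
  responseCache.set(cacheKey(req), res);
  if (res.usage) {
    console.debug(`LLM call used ${res.usage.totalTokens} tokens`);
  }
}
```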
Include the latest models for each provider (an illustrative configuration sketch follows this list):

- Anthropic (Claude)
- OpenAI (GPT)
- OpenRouter (via the OpenAI provider)
- Google (Gemini)
- Cohere (Command)
- Ollama (local)
- RuVector/ruvLLM (self-learning local)
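For illustration only, a multi-provider configuration using the models exercised in the validation run below might look like this; the configuration shape and field names are assumptions, not the package's actual API:

```typescript
// Hypothetical provider configuration; only the model names come from the test table below.
const providerConfig = {
  anthropic: { apiKey: process.env.ANTHROPIC_API_KEY, defaultModel: 'claude-3-haiku-20240307' },
  google: { apiKey: process.env.GOOGLE_API_KEY, defaultModel: 'gemini-2.0-flash' },
  openrouter: {
    apiKey: process.env.OPENROUTER_API_KEY,
    baseURL: 'https://openrouter.ai/api/v1', // OpenAI-compatible endpoint
    defaultModel: 'openai/gpt-4o-mini',
  },
  ollama: { baseURL: 'http://localhost:11434', defaultModel: 'qwen2.5:0.5b' },
  ruvector: { fallbackProvider: 'ollama', defaultModel: 'qwen2.5:0.5b' },
};
```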
Multi-model routing is integrated through the `multi-model-router` in `@claude-flow/integration`.

**Test Date:** 2026-01-05
| Provider | Model | Status | Notes |
|---|---|---|---|
| Anthropic | claude-3-haiku-20240307 | ✅ Pass | Full API integration |
| Google | gemini-2.0-flash | ✅ Pass | Free tier, streaming support |
| OpenRouter | openai/gpt-4o-mini | ✅ Pass | Via OpenAI-compatible API |
| Ollama | qwen2.5:0.5b | ✅ Pass | Local CPU-friendly model |
| RuVector | qwen2.5:0.5b | ✅ Pass | Ollama fallback working |
| Manager | Multi-provider | ✅ Pass | Load balancing + 0ms cache |
All six providers pass validation.
**BaseProvider features:** shared behavior for every provider, including the circuit breaker noted in the package layout above.
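A minimal sketch of that circuit-breaker pattern; the thresholds and method names are assumptions:

```typescript
// Minimal circuit-breaker sketch: open after repeated failures, half-open after a cooldown.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(
    private readonly maxFailures = 5,     // assumed failure threshold
    private readonly cooldownMs = 30_000, // assumed cooldown window
  ) {}

  canRequest(): boolean {
    if (this.failures < this.maxFailures) return true;
    // Half-open: allow a trial request once the cooldown has elapsed.
    return Date.now() - this.openedAt >= this.cooldownMs;
  }

  recordSuccess(): void {
    this.failures = 0;
  }

  recordFailure(): void {
    this.failures += 1;
    if (this.failures >= this.maxFailures) this.openedAt = Date.now();
  }
}
```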
**RuVector Provider:** SONA learning via the ruvLLM `/query` endpoint, falling back to Ollama when unavailable.

**Provider Manager:** orchestration layer with load balancing, failover, and the response cache noted in the results above.
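A hedged sketch of that orchestration, combining priority-ordered failover with a response cache; class and field names are assumptions:

```typescript
// Hypothetical manager: try providers in order, cache successful responses.
// Assumes ILLMProvider, LLMRequest, and LLMResponse are exported from the package.
import type { ILLMProvider, LLMRequest, LLMResponse } from '@claude-flow/providers';

class SimpleProviderManager {
  private readonly cache = new Map<string, LLMResponse>();

  constructor(private readonly providers: ILLMProvider[]) {}

  async complete(request: LLMRequest): Promise<LLMResponse> {
    const key = JSON.stringify(request);
    const cached = this.cache.get(key);
    if (cached) return cached; // cache hit: no provider call

    let lastError: unknown;
    for (const provider of this.providers) {
      try {
        const response = await provider.complete(request);
        this.cache.set(key, response);
        return response;
      } catch (error) {
        lastError = error; // fall through to the next provider
      }
    }
    throw lastError ?? new Error('No provider could satisfy the request');
  }
}
```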
**References:**

- `v2/src/providers/`
- `v3/@claude-flow/integration/src/multi-model-router.ts`
- https://github.com/ruvnet/ruvector/tree/main/examples/ruvLLM