docs/providers/openrouter.md
OpenRouter provides a unified API that routes requests to many models behind a single endpoint and API key. It is OpenAI-compatible, so most OpenAI SDKs work after switching the base URL to `https://openrouter.ai/api/v1`.

Select a model with the CLI:
```bash
openclaw models set openrouter/<provider>/<model>
```
Minimal config:

```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      model: { primary: "openrouter/auto" },
    },
  },
}
```
Bundled fallback examples:

| Model ref | Notes |
|---|---|
| `openrouter/auto` | OpenRouter automatic routing |
| `openrouter/moonshotai/kimi-k2.6` | Kimi K2.6 via MoonshotAI |
OpenRouter can also back the `image_generate` tool. Use an OpenRouter image model under `agents.defaults.imageGenerationModel`:
```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      imageGenerationModel: {
        primary: "openrouter/google/gemini-3.1-flash-image-preview",
        timeoutMs: 180_000,
      },
    },
  },
}
```
OpenClaw sends image requests to OpenRouter's chat completions image API with `modalities: ["image", "text"]`. Gemini image models receive supported `aspectRatio` and `resolution` hints through OpenRouter's `image_config`. Use `agents.defaults.imageGenerationModel.timeoutMs` for slower OpenRouter image models; the `image_generate` tool's per-call `timeoutMs` parameter still wins.
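As a rough sketch, an image request for the config above could look like the following. The prompt text and the exact `image_config` key names are illustrative assumptions, not verified OpenRouter field names:

```json5
// Hypothetical body for POST https://openrouter.ai/api/v1/chat/completions
{
  model: "google/gemini-3.1-flash-image-preview",
  messages: [{ role: "user", content: "A watercolor fox at dusk" }],
  // Ask OpenRouter to return an image alongside any text
  modalities: ["image", "text"],
  // Gemini-only hints forwarded when the model supports them (assumed keys)
  image_config: { aspect_ratio: "16:9" },
}
```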
OpenRouter can also back the `video_generate` tool through its asynchronous `/videos` API. Use an OpenRouter video model under `agents.defaults.videoGenerationModel`:
```json5
{
  env: { OPENROUTER_API_KEY: "sk-or-..." },
  agents: {
    defaults: {
      videoGenerationModel: {
        primary: "openrouter/google/veo-3.1-fast",
      },
    },
  },
}
```
OpenClaw submits text-to-video and image-to-video jobs to OpenRouter, polls the returned `polling_url`, and downloads the completed video from OpenRouter's `unsigned_urls` or the documented job content endpoint. Reference images are sent as first/last-frame images by default; images tagged with `reference_image` are sent as OpenRouter input references. The bundled `google/veo-3.1-fast` default advertises the currently supported 4/6/8-second durations, 720P/1080P resolutions, and 16:9/9:16 aspect ratios. Video-to-video is not registered for OpenRouter because the upstream video generation API currently accepts only text and image references.
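The submit-then-poll lifecycle can be sketched with two hypothetical responses. The job id, URLs, and `status` values here are illustrative placeholders; only the `polling_url` and `unsigned_urls` field names come from the behavior described above:

```json5
// 1. POST https://openrouter.ai/api/v1/videos → job accepted (sketch)
{
  id: "video-job-123",            // illustrative id
  status: "queued",               // assumed status value
  polling_url: "https://openrouter.ai/api/v1/videos/video-job-123",
}

// 2. GET polling_url until the job finishes (sketch)
{
  id: "video-job-123",
  status: "completed",            // assumed status value
  unsigned_urls: ["https://example.invalid/output.mp4"],  // illustrative URL
}
```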
OpenRouter can also be used as a TTS provider through its OpenAI-compatible `/audio/speech` endpoint:
```json5
{
  messages: {
    tts: {
      auto: "always",
      provider: "openrouter",
      providers: {
        openrouter: {
          model: "hexgrad/kokoro-82m",
          voice: "af_alloy",
          responseFormat: "mp3",
        },
      },
    },
  },
}
```
If `messages.tts.providers.openrouter.apiKey` is omitted, TTS reuses `models.providers.openrouter.apiKey`, then `OPENROUTER_API_KEY`.
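For reference, the speech request body follows the OpenAI audio-speech shape; a sketch for the config above (the `input` text is illustrative, and routing this exact model through OpenRouter's endpoint is an assumption):

```json5
// Hypothetical body for POST https://openrouter.ai/api/v1/audio/speech
{
  model: "hexgrad/kokoro-82m",
  voice: "af_alloy",
  input: "Hello from OpenClaw",   // illustrative text to synthesize
  response_format: "mp3",         // wire form of responseFormat
}
```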
Requests authenticate with a Bearer token carrying your API key. On real OpenRouter requests (`https://openrouter.ai/api/v1`), OpenClaw also adds OpenRouter's documented app-attribution headers:
| Header | Value |
|---|---|
| `HTTP-Referer` | `https://openclaw.ai` |
| `X-OpenRouter-Title` | `OpenClaw` |
| `X-OpenRouter-Categories` | `cli-agent,cloud-agent,programming-app,creative-writing,writing-assistant,general-chat,personal-agent` |
Response caching is configured per model ref via `params`:

```json5
{
agents: {
defaults: {
models: {
"openrouter/auto": {
params: {
responseCache: true,
responseCacheTtlSeconds: 300,
},
},
},
},
},
}
```
OpenClaw sends `X-OpenRouter-Cache: true` and, when configured,
`X-OpenRouter-Cache-TTL`. `responseCacheClear: true` forces a refresh for
the current request and stores the replacement response. Snake_case aliases
(`response_cache`, `response_cache_ttl_seconds`, and
`response_cache_clear`) are also accepted.
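With the config above, the cache-related headers on the outgoing request would be:

```text
X-OpenRouter-Cache: true
X-OpenRouter-Cache-TTL: 300
```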
This is separate from provider prompt caching and from OpenRouter's
Anthropic `cache_control` markers. It is only applied on verified
`openrouter.ai` routes, not custom proxy base URLs.