Fireworks exposes open-weight and routed models through an OpenAI-compatible API. OpenClaw includes a bundled Fireworks provider plugin that ships with two pre-cataloged Kimi models and accepts any Fireworks model or router id at runtime.
| Property | Value |
|---|---|
| Provider id | `fireworks` (alias: `fireworks-ai`) |
| Plugin | bundled, `enabledByDefault: true` |
| Auth env var | `FIREWORKS_API_KEY` |
| Onboarding flag | `--auth-choice fireworks-api-key` |
| Direct CLI flag | `--fireworks-api-key <key>` |
| API | OpenAI-compatible (`openai-completions`) |
| Base URL | `https://api.fireworks.ai/inference/v1` |
| Default model | `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` |
| Default alias | Kimi K2.5 Turbo |
<CodeGroup>

```bash Interactive
openclaw onboard --auth-choice fireworks-api-key
```

```bash Non-interactive
openclaw onboard --non-interactive \
  --auth-choice fireworks-api-key \
  --fireworks-api-key "$FIREWORKS_API_KEY"
```

```bash Environment variable
export FIREWORKS_API_KEY=fw-...
```

</CodeGroup>
Onboarding stores the key against the `fireworks` provider in your auth profiles and sets the **Fire Pass** Kimi K2.5 Turbo router as the default model.
The model list should include `Kimi K2.6` and `Kimi K2.5 Turbo (Fire Pass)`. If `FIREWORKS_API_KEY` cannot be resolved, `openclaw models status --json` reports the missing credential under `auth.unusableProfiles`.
For scripted or CI installs, pass everything on the command line:
```bash
openclaw onboard --non-interactive \
  --mode local \
  --auth-choice fireworks-api-key \
  --fireworks-api-key "$FIREWORKS_API_KEY" \
  --skip-health \
  --accept-risk
```
| Model ref | Name | Input | Context | Max output | Thinking |
|---|---|---|---|---|---|
| `fireworks/accounts/fireworks/models/kimi-k2p6` | Kimi K2.6 | text + image | 262,144 | 262,144 | Forced off |
| `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo` | Kimi K2.5 Turbo (Fire Pass) | text + image | 256,000 | 256,000 | Forced off (default) |
OpenClaw accepts any Fireworks model or router id at runtime. Use the exact id shown by Fireworks, prefixed with `fireworks/`. Dynamic resolution clones the Fire Pass template (text + image input, OpenAI-compatible API, default cost zero) and disables thinking automatically when the id matches the Kimi pattern.
```json5
{
  agents: {
    defaults: {
      model: {
        primary: "fireworks/accounts/fireworks/models/<your-model-id>",
      },
    },
  },
}
```
- Router model: `fireworks/accounts/fireworks/routers/kimi-k2p5-turbo`
- Direct model: `fireworks/accounts/fireworks/models/<model-name>`
OpenClaw strips the `fireworks/` prefix when constructing the API request and sends the remaining path to the Fireworks endpoint as the OpenAI-compatible `model` field.
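As a minimal sketch (an illustration of the mapping described above, not OpenClaw's actual code), the ref-to-request translation looks like this:

```shell
# Illustrative only: strip the "fireworks/" prefix to get the API model id.
ref="fireworks/accounts/fireworks/routers/kimi-k2p5-turbo"
model="${ref#fireworks/}"   # accounts/fireworks/routers/kimi-k2p5-turbo

# The remaining path becomes the OpenAI-compatible "model" field,
# sent to the base URL from the table above:
printf 'POST https://api.fireworks.ai/inference/v1/chat/completions\n'
printf '{"model": "%s", "messages": [...]}\n' "$model"
```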
To use Kimi reasoning end-to-end, configure the [Moonshot provider](/providers/moonshot) and route the same model through it.
<Warning>
A key set only in `~/.profile` is invisible to a launchd or systemd daemon unless that environment is imported into the service. Set the key in `~/.openclaw/.env` or via `env.shellEnv` so the gateway process can read it.
</Warning>
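For example, a minimal way to persist the key for the daemon (the `.env` path comes from this page; the key value is a placeholder, and `OPENCLAW_DIR` exists only for this sketch):

```shell
# Append the key to the env file the gateway reads at startup.
OPENCLAW_DIR="${OPENCLAW_DIR:-$HOME/.openclaw}"
mkdir -p "$OPENCLAW_DIR"
printf 'FIREWORKS_API_KEY=%s\n' "fw-..." >> "$OPENCLAW_DIR/.env"
chmod 600 "$OPENCLAW_DIR/.env"   # keep the credential private
```

Use `>>` only for a first-time add; when rotating a key, edit the existing `FIREWORKS_API_KEY=` line instead of appending a duplicate.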
On macOS, `openclaw gateway install` already wires `~/.openclaw/.env` into the LaunchAgent environment file. Re-run install (or `openclaw doctor --fix`) after rotating the key.