docs/providers/mistral.md
OpenClaw supports Mistral for text/image model routing (`mistral/...`), for
audio transcription via Voxtral in media understanding, and for memory
embeddings (`memorySearch.provider = "mistral"`).
Authenticate the `mistral` provider by setting the `MISTRAL_API_KEY` environment variable (requests go to `https://api.mistral.ai/v1`). Or pass the key directly:
```bash
openclaw onboard --mistral-api-key "$MISTRAL_API_KEY"
```
OpenClaw currently ships this bundled Mistral catalog:
| Model ref | Input | Context | Max output | Notes |
|---|---|---|---|---|
| `mistral/mistral-large-latest` | text, image | 262,144 | 16,384 | Default model |
| `mistral/mistral-medium-2508` | text, image | 262,144 | 8,192 | Mistral Medium 3.1 |
| `mistral/mistral-small-latest` | text, image | 128,000 | 16,384 | Mistral Small 4; adjustable reasoning via API `reasoning_effort` |
| `mistral/pixtral-large-latest` | text, image | 128,000 | 32,768 | Pixtral |
| `mistral/codestral-latest` | text | 256,000 | 4,096 | Coding |
| `mistral/devstral-medium-latest` | text | 262,144 | 32,768 | Devstral 2 |
| `mistral/magistral-small` | text | 128,000 | 40,000 | Reasoning-enabled |
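
You select one of these by its model ref. A minimal sketch, assuming the config accepts a top-level `model` key (check your OpenClaw version's config reference for the exact location):

```json5
{
  // Hypothetical placement of the model ref; adjust to your config schema.
  model: "mistral/mistral-large-latest",
}
```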
Use Voxtral for batch audio transcription through the media understanding pipeline:

```json5
{
  tools: {
    media: {
      audio: {
        enabled: true,
        models: [{ provider: "mistral", model: "voxtral-mini-latest" }],
      },
    },
  },
}
```
The bundled `mistral` plugin registers Voxtral Realtime as a streaming STT
provider for Voice Call.
| Setting | Config path | Default |
|---|---|---|
| API key | `plugins.entries.voice-call.config.streaming.providers.mistral.apiKey` | Falls back to `MISTRAL_API_KEY` |
| Model | `...mistral.model` | `voxtral-mini-transcribe-realtime-2602` |
| Encoding | `...mistral.encoding` | `pcm_mulaw` |
| Sample rate | `...mistral.sampleRate` | `8000` |
| Target delay | `...mistral.targetStreamingDelayMs` | `800` |
```json5
{
  plugins: {
    entries: {
      "voice-call": {
        config: {
          streaming: {
            enabled: true,
            provider: "mistral",
            providers: {
              mistral: {
                apiKey: "${MISTRAL_API_KEY}",
                targetStreamingDelayMs: 800,
              },
            },
          },
        },
      },
    },
  },
}
```
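
As a sanity check on those streaming defaults: `pcm_mulaw` (G.711 mu-law) encodes one byte per sample, so the audio buffered over the target delay is just sample rate times delay. A quick sketch of that arithmetic (illustrative only, not an OpenClaw API):

```typescript
// Bytes of mu-law audio covering a given streaming delay.
// G.711 mu-law uses exactly 1 byte per sample.
function mulawBytesForDelay(sampleRateHz: number, delayMs: number): number {
  return sampleRateHz * (delayMs / 1000);
}

// With the defaults above: 8000 Hz sample rate, 800 ms target delay.
console.log(mulawBytesForDelay(8000, 800)); // 6400 bytes
```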
OpenClaw maps the session **thinking** level to Mistral's API:
| OpenClaw thinking level | Mistral `reasoning_effort` |
| ------------------------------------------------ | -------------------------- |
| **off** / **minimal** | `none` |
| **low** / **medium** / **high** / **xhigh** / **adaptive** / **max** | `high` |
<Note>
Other bundled Mistral catalog models do not use this parameter. Keep using `magistral-*` models when you want Mistral's native reasoning-first behavior.
</Note>
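
The table collapses to a two-way switch; a sketch in TypeScript (the level names come from the table above, but the function itself is illustrative, not OpenClaw's internal code):

```typescript
// OpenClaw session thinking levels, per the mapping table.
type ThinkingLevel =
  | "off" | "minimal"
  | "low" | "medium" | "high" | "xhigh" | "adaptive" | "max";

// Collapse a thinking level to Mistral's reasoning_effort value.
function toReasoningEffort(level: ThinkingLevel): "none" | "high" {
  return level === "off" || level === "minimal" ? "none" : "high";
}

console.log(toReasoningEffort("minimal")); // "none"
console.log(toReasoningEffort("adaptive")); // "high"
```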
To route memory embeddings through Mistral:

```json5
{
memorySearch: { provider: "mistral" },
}
```