docs/providers/qwen.md
Qwen OAuth has been removed. The free-tier OAuth integration (`qwen-portal`) that used `portal.qwen.ai` endpoints is no longer available. See Issue #49557 for background.
OpenClaw now treats Qwen as a first-class bundled provider with the canonical id `qwen`. The bundled provider targets the Qwen Cloud / Alibaba DashScope and Coding Plan endpoints, and keeps legacy `modelstudio` ids working as a compatibility alias.
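The alias behavior can be pictured as a simple prefix rewrite. This is an illustrative sketch of the described mapping, not OpenClaw's actual resolver:

```shell
# Map legacy modelstudio/... model refs onto the canonical qwen/... namespace.
# Anything already canonical (or from another provider) passes through unchanged.
canonical_model_ref() {
  case "$1" in
    modelstudio/*) echo "qwen/${1#modelstudio/}" ;;
    *)             echo "$1" ;;
  esac
}
```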
The provider id is `qwen`. The API key is read from `QWEN_API_KEY`, with `MODELSTUDIO_API_KEY` and `DASHSCOPE_API_KEY` accepted as legacy fallbacks. Choose your plan type and follow the setup steps.
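The lookup order can be sketched as below. The precedence (`QWEN_API_KEY` first, legacy names as fallbacks) is an assumption based on the alias behavior described on this page:

```shell
# Resolve the Qwen API key from the documented environment variables,
# preferring the canonical name and falling back to the legacy ones.
resolve_qwen_key() {
  if [ -n "$QWEN_API_KEY" ]; then
    echo "$QWEN_API_KEY"
  elif [ -n "$MODELSTUDIO_API_KEY" ]; then
    echo "$MODELSTUDIO_API_KEY"
  elif [ -n "$DASHSCOPE_API_KEY" ]; then
    echo "$DASHSCOPE_API_KEY"
  fi
}
```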
<Tabs>
<Tab title="Coding Plan (subscription)">
**Best for:** subscription-based access through the Qwen Coding Plan.
<Steps>
<Step title="Get your API key">
Create or copy an API key from [home.qwencloud.com/api-keys](https://home.qwencloud.com/api-keys).
</Step>
<Step title="Run onboarding">
For the **Global** endpoint:
```bash
openclaw onboard --auth-choice qwen-api-key
```
For the **China** endpoint:
```bash
openclaw onboard --auth-choice qwen-api-key-cn
```
</Step>
<Step title="Set a default model">
```json5
{
  agents: {
    defaults: {
      model: { primary: "qwen/qwen3.5-plus" },
    },
  },
}
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider qwen
```
</Step>
</Steps>
<Note>
Legacy `modelstudio-*` auth-choice ids and `modelstudio/...` model refs still
work as compatibility aliases, but new setup flows should prefer the canonical
`qwen-*` auth-choice ids and `qwen/...` model refs. If you define an exact
custom `models.providers.modelstudio` entry with another `api` value, that
custom provider owns `modelstudio/...` refs instead of the Qwen compatibility
alias.
</Note>
</Tab>
<Tab title="Standard (pay-as-you-go)">
**Best for:** pay-as-you-go access billed through Alibaba Cloud DashScope.
<Steps>
<Step title="Get your API key">
Create or copy an API key from [home.qwencloud.com/api-keys](https://home.qwencloud.com/api-keys).
</Step>
<Step title="Run onboarding">
For the **Global** endpoint:
```bash
openclaw onboard --auth-choice qwen-standard-api-key
```
For the **China** endpoint:
```bash
openclaw onboard --auth-choice qwen-standard-api-key-cn
```
</Step>
<Step title="Set a default model">
```json5
{
  agents: {
    defaults: {
      model: { primary: "qwen/qwen3.5-plus" },
    },
  },
}
```
</Step>
<Step title="Verify the model is available">
```bash
openclaw models list --provider qwen
```
</Step>
</Steps>
<Note>
Legacy `modelstudio-*` auth-choice ids and `modelstudio/...` model refs still
work as compatibility aliases, but new setup flows should prefer the canonical
`qwen-*` auth-choice ids and `qwen/...` model refs. If you define an exact
custom `models.providers.modelstudio` entry with another `api` value, that
custom provider owns `modelstudio/...` refs instead of the Qwen compatibility
alias.
</Note>
</Tab>
</Tabs>
| Plan | Region | Auth choice | Endpoint |
|---|---|---|---|
| Standard (pay-as-you-go) | China | `qwen-standard-api-key-cn` | `dashscope.aliyuncs.com/compatible-mode/v1` |
| Standard (pay-as-you-go) | Global | `qwen-standard-api-key` | `dashscope-intl.aliyuncs.com/compatible-mode/v1` |
| Coding Plan (subscription) | China | `qwen-api-key-cn` | `coding.dashscope.aliyuncs.com/v1` |
| Coding Plan (subscription) | Global | `qwen-api-key` | `coding-intl.dashscope.aliyuncs.com/v1` |
The provider auto-selects the endpoint based on your auth choice. Canonical choices use the `qwen-*` family; `modelstudio-*` remains compatibility-only. You can override the endpoint with a custom `baseUrl` in config.
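For example, pinning a specific host might look like this. This is a sketch: the `models.providers.qwen.baseUrl` key is taken from the override behavior described on this page, and the URL shown is the Global Standard endpoint from the table above.

```json5
{
  models: {
    providers: {
      qwen: {
        // Pin the Global Standard (pay-as-you-go) endpoint explicitly.
        baseUrl: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
      },
    },
  },
}
```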
OpenClaw currently ships this bundled Qwen catalog. The configured catalog is endpoint-aware: Coding Plan configs omit models that are only known to work on the Standard endpoint.
| Model ref | Input | Context | Notes |
|---|---|---|---|
| `qwen/qwen3.5-plus` | text, image | 1,000,000 | Default model |
| `qwen/qwen3.6-plus` | text, image | 1,000,000 | Prefer Standard endpoints when you need this model |
| `qwen/qwen3-max-2026-01-23` | text | 262,144 | Qwen Max line |
| `qwen/qwen3-coder-next` | text | 262,144 | Coding |
| `qwen/qwen3-coder-plus` | text | 1,000,000 | Coding |
| `qwen/MiniMax-M2.5` | text | 1,000,000 | Reasoning enabled |
| `qwen/glm-5` | text | 202,752 | GLM |
| `qwen/glm-4.7` | text | 202,752 | GLM |
| `qwen/kimi-k2.5` | text, image | 262,144 | Moonshot AI via Alibaba |
For reasoning-enabled Qwen Cloud models, the bundled provider maps OpenClaw thinking levels to DashScope's top-level `enable_thinking` request flag. Disabled thinking sends `enable_thinking: false`; all other thinking levels send `enable_thinking: true`.
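The mapping above can be sketched as a two-way switch. The level names used here are illustrative, not OpenClaw's actual level identifiers:

```shell
# Map an OpenClaw thinking level to DashScope's enable_thinking flag:
# only disabled thinking turns it off; every other level turns it on.
enable_thinking_flag() {
  case "$1" in
    off) echo 'enable_thinking: false' ;;
    *)   echo 'enable_thinking: true' ;;
  esac
}
```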
The qwen plugin also exposes multimodal capabilities on the Standard DashScope endpoints (not the Coding Plan endpoints):

- **Image/video understanding:** `qwen-vl-max-latest`
- **Video generation:** `wan2.6-t2v` (default), `wan2.6-i2v`, `wan2.6-r2v`, `wan2.6-r2v-flash`, `wan2.7-r2v`

To use Qwen as the default video provider:
```json5
{
  agents: {
    defaults: {
      videoGenerationModel: { primary: "qwen/wan2.6-t2v" },
    },
  },
}
```
| Property | Value |
| ------------- | --------------------- |
| Model | `qwen-vl-max-latest` |
| Supported input | Images, video |
Media understanding is auto-resolved from the configured Qwen auth; no additional config is needed. Ensure you are using a Standard (pay-as-you-go) endpoint for media understanding support:
- China: `dashscope.aliyuncs.com/compatible-mode/v1`
- Global: `dashscope-intl.aliyuncs.com/compatible-mode/v1`
If the Coding Plan endpoints return an "unsupported model" error for
`qwen3.6-plus`, switch to Standard (pay-as-you-go) instead of the Coding Plan
endpoint/key pair.
OpenClaw's bundled Qwen catalog does not advertise `qwen3.6-plus` on Coding
Plan endpoints, but explicitly configured `qwen/qwen3.6-plus` entries under
`models.providers.qwen.models` are honored on Coding Plan baseUrls so you
can opt that model in if Aliyun enables it on your subscription. The
upstream API still decides whether the call succeeds.
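Such an opt-in might look like the following. This is a hedged sketch: the exact per-model entry shape under `models.providers.qwen.models` is an assumption, so check your OpenClaw config reference before relying on it.

```json5
{
  models: {
    providers: {
      qwen: {
        // Assumed entry shape: explicitly listing qwen3.6-plus opts it in
        // even on Coding Plan baseUrls. The upstream API may still reject it.
        models: [{ id: "qwen3.6-plus" }],
      },
    },
  },
}
```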
- **Text/chat models:** bundled now
- **Tool calling, structured output, thinking:** inherited from the OpenAI-compatible transport
- **Image generation:** planned at the provider-plugin layer
- **Image/video understanding:** bundled now on the Standard endpoint
- **Speech/audio:** planned at the provider-plugin layer
- **Memory embeddings/reranking:** planned through the embedding adapter surface
- **Video generation:** bundled now through the shared video-generation capability
Video generation always targets the regional DashScope video hosts:

- Global/Intl: `https://dashscope-intl.aliyuncs.com`
- China: `https://dashscope.aliyuncs.com`
That means a normal `models.providers.qwen.baseUrl` pointing at either the
Coding Plan or Standard Qwen hosts still keeps video generation on the correct
regional DashScope video endpoint.
Current bundled Qwen video-generation limits:
- Up to **1** output video per request
- Up to **1** input image
- Up to **4** input videos
- Up to **10 seconds** duration
- Supports `size`, `aspectRatio`, `resolution`, `audio`, and `watermark`
- Reference image/video mode currently requires **remote http(s) URLs**. Local
file paths are rejected up front because the DashScope video endpoint does not
accept uploaded local buffers for those references.
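The up-front rejection of local paths amounts to a scheme check on each reference. This is an illustrative sketch of that rule, not OpenClaw's actual validation code:

```shell
# Accept only remote http(s) URLs for reference image/video inputs,
# mirroring the restriction described above; local paths fail the check.
is_remote_ref() {
  case "$1" in
    http://*|https://*) return 0 ;;
    *)                  return 1 ;;
  esac
}
```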
Native-streaming usage compatibility applies to both the Coding Plan hosts and
the Standard DashScope-compatible hosts:
- `https://coding.dashscope.aliyuncs.com/v1`
- `https://coding-intl.dashscope.aliyuncs.com/v1`
- `https://dashscope.aliyuncs.com/compatible-mode/v1`
- `https://dashscope-intl.aliyuncs.com/compatible-mode/v1`