docs/provider-config/qwen.mdx
Alibaba Cloud's Qwen (通义千问) is a comprehensive family of AI models, ranging from compact 0.6B-parameter models to massive 235B MoE models. The Qwen3 series features hybrid thinking capabilities and strong coding performance.
Website: https://bailian.console.aliyun.com/
Cline supports Qwen models with separate catalogs for International and China regions:
- `qwen3-coder-plus` - High-performance coding model with 1M context ($1.00/$5.00 per 1M tokens)
- `qwen3-coder-480b-a35b-instruct` - 480B MoE coding model with 204K context ($1.50/$7.50 per 1M tokens)
- `qwen3-235b-a22b` - 235B MoE model with thinking support (131K context, $2.00/$8.00 per 1M tokens)
- `qwen3-32b` - Dense 32B model with thinking (131K context)
- `qwen3-30b-a3b` - Compact 30B MoE with thinking (131K context)
- `qwen3-14b` - Mid-size model with thinking (131K context)
- `qwen3-8b` - Efficient model with thinking (131K context)
- `qwen3-4b` - Compact model with thinking (131K context)
- `qwen3-1.7b` - Small model with thinking (32K context)
- `qwen3-0.6b` - Ultra-compact model with thinking (32K context)
- `qwen2.5-coder-32b-instruct` through `qwen2.5-coder-0.5b-instruct` - Range of coding-optimized models
- `qwen-coder-plus` / `qwen-coder-plus-latest` - API-hosted coder models
- `qwen-plus` / `qwen-plus-latest` - General-purpose API models with thinking
- `qwen-turbo` / `qwen-turbo-latest` - Fast API models with 1M context
- `qwen-max` / `qwen-max-latest` - Maximum-capability API models
- `qwen-vl-max` / `qwen-vl-max-latest` - Vision-language max models with image support
- `qwen-vl-plus` / `qwen-vl-plus-latest` - Vision-language plus models
- `deepseek-v3` - DeepSeek V3 hosted on Qwen infrastructure
- `deepseek-r1` - DeepSeek R1 reasoning model

Qwen3 models support hybrid thinking with configurable thinking budgets. When enabled, models generate step-by-step reasoning before providing answers, improving performance on complex coding and math tasks.
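To make the region split and the opt-in thinking behavior concrete, here is a minimal sketch of assembling a chat request payload for Qwen's OpenAI-compatible endpoints. The base URLs, the `enable_thinking` flag, and the `thinking_budget` field name are assumptions based on DashScope's compatible-mode API; check the Alibaba Cloud documentation for the exact parameter names your model supports.

```python
# Sketch: build a chat-completions payload for Qwen's OpenAI-compatible API.
# Endpoint URLs and thinking-related field names below are assumptions, not
# confirmed by this document; verify against the DashScope docs.

INTL_BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"   # International region (assumed)
CHINA_BASE_URL = "https://dashscope.aliyuncs.com/compatible-mode/v1"       # China region (assumed)

def build_chat_request(model: str, prompt: str,
                       enable_thinking: bool = False,
                       thinking_budget: int = 4096) -> dict:
    """Assemble a request body; hybrid thinking is opt-in for Qwen3 models."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    if enable_thinking:
        # Hypothetical field names: the model emits step-by-step reasoning
        # before its answer, capped at roughly `thinking_budget` tokens.
        payload["enable_thinking"] = True
        payload["thinking_budget"] = thinking_budget
    return payload

# A plain coding request against the 1M-context coder model:
req = build_chat_request("qwen3-coder-plus", "Write a binary search in Python.")

# The same helper with thinking enabled for a reasoning-heavy task:
thinking_req = build_chat_request("qwen3-235b-a22b", "Prove n^2 >= n for n >= 1.",
                                  enable_thinking=True)
```

You would POST this payload (with your API key in the `Authorization` header) to the base URL matching the region your Cline catalog is configured for.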