`younis-ahmed/qwen-ai-provider` is a community provider that uses Qwen to provide language model support for the AI SDK.

## Setup

The Qwen provider is available in the `qwen-ai-provider` module. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add qwen-ai-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install qwen-ai-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add qwen-ai-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add qwen-ai-provider" dark />
  </Tab>
</Tabs>

## Provider Instance

You can import the default provider instance `qwen` from `qwen-ai-provider`:
```ts
import { qwen } from 'qwen-ai-provider';
```
If you need a customized setup, you can import `createQwen` from `qwen-ai-provider` and create a provider instance with your settings:
```ts
import { createQwen } from 'qwen-ai-provider';

const qwen = createQwen({
  // optional settings, e.g.
  // baseURL: 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1',
});
```
You can use the following optional settings to customize the Qwen provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers.
  The default prefix is `https://dashscope-intl.aliyuncs.com/compatible-mode/v1`.

- **apiKey** _string_

  API key that is sent using the `Authorization` header.
  It defaults to the `DASHSCOPE_API_KEY` environment variable.

- **headers** _Record&lt;string,string&gt;_

  Custom headers to include in the requests.

- **fetch** _(input: RequestInfo, init?: RequestInit) => Promise&lt;Response&gt;_

  Custom fetch implementation. Defaults to the global `fetch` function.
  You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
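As a sketch of what the `fetch` option enables, a small logging middleware could wrap any fetch-compatible function before it is passed to `createQwen`. The `withLogging` helper below is hypothetical (not part of `qwen-ai-provider`):

```typescript
// Hypothetical logging middleware for the `fetch` setting: wraps any
// fetch-compatible function and logs the request URL before delegating.
type FetchLike = (
  input: string | URL | Request,
  init?: RequestInit,
) => Promise<Response>;

const withLogging =
  (baseFetch: FetchLike): FetchLike =>
  async (input, init) => {
    const url =
      typeof input === 'string'
        ? input
        : input instanceof URL
          ? input.toString()
          : input.url;
    console.log(`qwen request: ${url}`);
    return baseFetch(input, init);
  };

// It could then be passed to the provider, e.g.:
// const qwen = createQwen({ fetch: withLogging(fetch) });
```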
## Language Models

You can create models that call the Qwen chat API using a provider instance. The first argument is the model id, e.g. `qwen-plus`. Some Qwen chat models support tool calls.

```ts
const model = qwen('qwen-plus');
```
### Example

You can use Qwen language models to generate text with the `generateText` function:
```ts
import { qwen } from 'qwen-ai-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  model: qwen('qwen-plus'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
Qwen language models can also be used in the `streamText` function, and they support structured data generation with `Output` (see AI SDK Core).
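For instance, streaming can be sketched as follows (this assumes the `DASHSCOPE_API_KEY` environment variable is set, so it needs valid credentials to run):

```typescript
import { qwen } from 'qwen-ai-provider';
import { streamText } from 'ai';

// Stream the response incrementally instead of waiting for the full text.
// Assumes DASHSCOPE_API_KEY is set in the environment.
const result = streamText({
  model: qwen('qwen-plus'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}
```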
### Model Capabilities

| Model                     | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
| ------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| `qwen-vl-max`             | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen-plus-latest`        | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen-max`                | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen2.5-72b-instruct`    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen2.5-14b-instruct-1m` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen2.5-vl-72b-instruct` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
## Embedding Models

You can create models that call the Qwen embeddings API using the `.embeddingModel()` factory method:

```ts
const model = qwen.embeddingModel('text-embedding-v3');
```
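As a usage sketch, an embedding model can be passed to the AI SDK's `embed` function (this assumes the `DASHSCOPE_API_KEY` environment variable is set, so it needs valid credentials to run):

```typescript
import { qwen } from 'qwen-ai-provider';
import { embed } from 'ai';

// Embed a single value; `embedding` is a numeric vector.
// Assumes DASHSCOPE_API_KEY is set in the environment.
const { embedding } = await embed({
  model: qwen.embeddingModel('text-embedding-v3'),
  value: 'sunny day at the beach',
});

console.log(embedding.length); // dimensionality of the returned vector
```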
### Model Capabilities

| Model               | Default Dimensions | Maximum Number of Rows | Maximum Tokens per Row |
| ------------------- | ------------------ | ---------------------- | ---------------------- |
| `text-embedding-v3` | 1024               | 6                      | 8,192                  |