# Qwen Provider

<Note type="warning"> This community provider is not yet compatible with AI SDK 5. Please wait for the provider to be updated or consider using an [AI SDK 5 compatible provider](/providers/ai-sdk-providers). </Note>

`younis-ahmed/qwen-ai-provider` is a community provider that uses Qwen to provide language model support for the AI SDK.

## Setup

The Qwen provider is available in the `qwen-ai-provider` module. You can install it with:

<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add qwen-ai-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install qwen-ai-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add qwen-ai-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add qwen-ai-provider" dark />
  </Tab>
</Tabs>

## Provider Instance

You can import the default provider instance `qwen` from `qwen-ai-provider`:

```ts
import { qwen } from 'qwen-ai-provider';
```

If you need a customized setup, you can import `createQwen` from `qwen-ai-provider` and create a provider instance with your settings:

```ts
import { createQwen } from 'qwen-ai-provider';

const qwen = createQwen({
  // optional settings, e.g.
  // baseURL: 'https://qwen/api/v1',
});
```

You can use the following optional settings to customize the Qwen provider instance:

- **baseURL** _string_

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `https://dashscope-intl.aliyuncs.com/compatible-mode/v1`.

- **apiKey** _string_

  API key that is sent using the `Authorization` header. It defaults to the `DASHSCOPE_API_KEY` environment variable.

- **headers** `Record<string,string>`

  Custom headers to include in the requests.

- **fetch** `(input: RequestInfo, init?: RequestInit) => Promise<Response>`

  Custom fetch implementation. Defaults to the global `fetch` function. You can use it as middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
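As a sketch of how these settings combine, the following configures a provider instance with a custom base URL and an extra header. The proxy URL and header name are placeholders, not values the provider requires:

```ts
import { createQwen } from 'qwen-ai-provider';

// Hypothetical setup: route requests through a proxy that mirrors the
// DashScope-compatible API, and tag each request with a custom header.
const qwen = createQwen({
  baseURL: 'https://my-proxy.example.com/compatible-mode/v1', // placeholder
  apiKey: process.env.DASHSCOPE_API_KEY,
  headers: { 'X-Request-Source': 'docs-example' }, // placeholder header
});
```

The resulting `qwen` instance is used exactly like the default import.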

## Language Models

You can create models that call the Qwen chat API using a provider instance. The first argument is the model ID, e.g. `qwen-plus`. Some Qwen chat models support tool calls.

```ts
const model = qwen('qwen-plus');
```

### Example

You can use Qwen language models to generate text with the `generateText` function:

```ts
import { qwen } from 'qwen-ai-provider';
import { generateText } from 'ai';

const { text } = await generateText({
  model: qwen('qwen-plus'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```

Qwen language models can also be used in the `streamText` function and support structured data generation with `Output` (see AI SDK Core).
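As a minimal streaming sketch (assuming the same `qwen-plus` model as above, and the AI SDK 4 API where `streamText` returns immediately and the text is consumed via `textStream`):

```ts
import { qwen } from 'qwen-ai-provider';
import { streamText } from 'ai';

// Stream the completion chunk-by-chunk instead of waiting for the full text.
const result = streamText({
  model: qwen('qwen-plus'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```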

### Model Capabilities

| Model                     | Image Input         | Object Generation   | Tool Usage          | Tool Streaming      |
| ------------------------- | ------------------- | ------------------- | ------------------- | ------------------- |
| `qwen-vl-max`             | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen-plus-latest`        | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen-max`                | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen2.5-72b-instruct`    | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen2.5-14b-instruct-1m` | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `qwen2.5-vl-72b-instruct` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
<Note> The table above lists popular models. Please see the [Qwen docs](https://www.alibabacloud.com/help/en/model-studio/getting-started/models) for a full list of available models. You can also pass any available provider model ID as a string if needed. </Note>

## Embedding Models

You can create models that call the Qwen embeddings API using the `.embeddingModel()` factory method.

```ts
const model = qwen.embeddingModel('text-embedding-v3');
```
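As a sketch of how such a model is used, the AI SDK's `embed` function accepts it directly (the input string here is an arbitrary example):

```ts
import { qwen } from 'qwen-ai-provider';
import { embed } from 'ai';

// Embed a single value; `embedding` is a numeric vector
// (1024 dimensions by default for text-embedding-v3).
const { embedding } = await embed({
  model: qwen.embeddingModel('text-embedding-v3'),
  value: 'sunny day at the beach',
});
```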

### Model Capabilities

| Model               | Default Dimensions | Maximum number of rows | Maximum tokens per row |
| ------------------- | ------------------ | ---------------------- | ---------------------- |
| `text-embedding-v3` | 1024               | 6                      | 8,192                  |