# Provider Options

Provider options let you pass provider-specific configuration that goes beyond the standard settings shared by all providers. They are set via the `providerOptions` property on functions such as `generateText` and `streamText`.

```ts
const result = await generateText({
  model: openai('gpt-5.2'),
  prompt: 'Explain quantum entanglement.',
  providerOptions: {
    openai: {
      reasoningEffort: 'low',
    },
  },
});
```

Provider options are namespaced by the provider name (e.g. `openai`, `anthropic`), so you can include options for multiple providers in the same call; only the options matching the active provider are used. See Prompts: Provider Options for details on applying options at the message and message-part level.
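For instance, a single `providerOptions` object can carry namespaces for several providers at once, and the SDK reads only the namespace matching the active provider. The `pickProviderOptions` helper below is purely illustrative (not an SDK export); it mimics the selection the SDK performs internally:

```typescript
// A providerOptions object carrying settings for two providers at once.
const providerOptions: Record<string, Record<string, unknown>> = {
  openai: { reasoningEffort: 'low' },
  anthropic: { thinking: { type: 'enabled', budgetTokens: 4000 } },
};

// Hypothetical helper illustrating the selection behavior: only the
// namespace matching the active provider is used; the rest are ignored.
function pickProviderOptions(
  activeProvider: string,
  options: Record<string, Record<string, unknown>>,
): Record<string, unknown> | undefined {
  return options[activeProvider];
}

// With OpenAI active, the anthropic namespace is simply ignored.
console.log(pickProviderOptions('openai', providerOptions));
```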

<Note> For controlling reasoning effort, consider using the top-level [`reasoning` parameter](/docs/ai-sdk-core/reasoning) instead of provider-specific options. It provides a portable setting that works across all providers that support reasoning. Use provider-specific options only when you need features like exact token budgets. </Note>

## Common Provider Options

The sections below cover the most frequently used provider options, focusing on reasoning and output control for OpenAI and Anthropic. For a complete reference, see the individual provider pages.


## OpenAI

### Reasoning Effort

For reasoning models (e.g. o3, o4-mini, gpt-5.2), `reasoningEffort` controls how much internal reasoning the model performs before responding. Lower values are faster and cheaper; higher values produce more thorough answers.

```ts
import {
  openai,
  type OpenAILanguageModelResponsesOptions,
} from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text, usage, providerMetadata } = await generateText({
  model: openai('gpt-5.2'),
  prompt: 'Invent a new holiday and describe its traditions.',
  providerOptions: {
    openai: {
      reasoningEffort: 'low', // 'none' | 'minimal' | 'low' | 'medium' | 'high' | 'xhigh'
    } satisfies OpenAILanguageModelResponsesOptions,
  },
});

console.log('Reasoning tokens:', providerMetadata?.openai?.reasoningTokens);
```

| Value | Behavior |
| --- | --- |
| `'none'` | No reasoning (GPT-5.1 models only) |
| `'minimal'` | Bare-minimum reasoning |
| `'low'` | Fast, concise reasoning |
| `'medium'` | Balanced (default) |
| `'high'` | Thorough reasoning |
| `'xhigh'` | Maximum reasoning (GPT-5.1-Codex-Max only) |
<Note> `'none'` and `'xhigh'` are only supported on specific models. Using them with unsupported models will result in an error. </Note>
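One way to avoid that error is to check the value against the model before making the call. Below is a minimal sketch based on the restrictions in the table above; the model-matching rules are illustrative, not an official SDK API:

```typescript
// Hypothetical guard: effort values that only specific models accept,
// per the table above. Adjust the checks to match current model names.
const restrictedEfforts: Record<string, (model: string) => boolean> = {
  none: model => model.startsWith('gpt-5.1'),
  xhigh: model => model.toLowerCase() === 'gpt-5.1-codex-max',
};

function isEffortSupported(model: string, effort: string): boolean {
  const check = restrictedEfforts[effort];
  // Unrestricted values ('minimal' through 'high') work on all reasoning models.
  return check ? check(model) : true;
}

// 'xhigh' is not supported on gpt-5.2, so this returns false.
console.log(isEffortSupported('gpt-5.2', 'xhigh'));
```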

### Reasoning Summary

When working with reasoning models, you may want to see how the model arrived at its answer. The `reasoningSummary` option surfaces the model's thought process.

#### Streaming

```ts
import {
  openai,
  type OpenAILanguageModelResponsesOptions,
} from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = streamText({
  model: openai('gpt-5.2'),
  prompt: 'Tell me about the Mission burrito debate in San Francisco.',
  providerOptions: {
    openai: {
      reasoningSummary: 'detailed', // 'auto' | 'detailed'
    } satisfies OpenAILanguageModelResponsesOptions,
  },
});

for await (const part of result.fullStream) {
  if (part.type === 'reasoning') {
    console.log(`Reasoning: ${part.textDelta}`);
  } else if (part.type === 'text-delta') {
    process.stdout.write(part.textDelta);
  }
}
```

#### Non-streaming

```ts
import {
  openai,
  type OpenAILanguageModelResponsesOptions,
} from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-5.2'),
  prompt: 'Tell me about the Mission burrito debate in San Francisco.',
  providerOptions: {
    openai: {
      reasoningSummary: 'auto',
    } satisfies OpenAILanguageModelResponsesOptions,
  },
});

console.log('Reasoning:', result.reasoning);
```

| Value | Behavior |
| --- | --- |
| `'auto'` | Condensed summary of reasoning |
| `'detailed'` | Comprehensive reasoning output |

### Text Verbosity

Control the length and detail of the model's text response independently of reasoning:

```ts
import {
  openai,
  type OpenAILanguageModelResponsesOptions,
} from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-5-mini'),
  prompt: 'Write a poem about a boy and his first pet dog.',
  providerOptions: {
    openai: {
      textVerbosity: 'low', // 'low' | 'medium' | 'high'
    } satisfies OpenAILanguageModelResponsesOptions,
  },
});
```

| Value | Behavior |
| --- | --- |
| `'low'` | Terse, minimal responses |
| `'medium'` | Balanced detail (default) |
| `'high'` | Verbose, comprehensive responses |

## Anthropic

### Thinking (Extended Reasoning)

Anthropic's thinking feature gives Claude models a dedicated "thinking" phase before they respond. You enable it by providing a `thinking` object with a token budget.

```ts
import { anthropic, type AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const { text, reasoning, reasoningText } = await generateText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'How many people will live in the world in 2040?',
  providerOptions: {
    anthropic: {
      thinking: { type: 'enabled', budgetTokens: 12000 },
    } satisfies AnthropicLanguageModelOptions,
  },
});

console.log('Reasoning:', reasoningText);
console.log('Answer:', text);
```

The `budgetTokens` value sets the upper limit on how many tokens the model can use for its internal reasoning. Higher budgets allow deeper reasoning but increase latency and cost.
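Because an out-of-range budget is rejected by the API, it can help to validate it before sending the request. A minimal sketch, assuming Anthropic's documented constraints (a minimum of 1,024 thinking tokens, and a budget strictly below the overall output token limit); check the current Anthropic docs before relying on these exact numbers:

```typescript
// Sketch: validate a thinking budget up front.
// The 1024-token floor and the budget < maxOutputTokens rule are assumptions
// based on Anthropic's documented limits at the time of writing.
function thinkingConfig(budgetTokens: number, maxOutputTokens: number) {
  if (budgetTokens < 1024) {
    throw new Error('budgetTokens must be at least 1024');
  }
  if (budgetTokens >= maxOutputTokens) {
    throw new Error('budgetTokens must be less than maxOutputTokens');
  }
  return { type: 'enabled' as const, budgetTokens };
}

// A 12k thinking budget inside a 16k output limit is valid.
console.log(thinkingConfig(12000, 16000));
```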

<Note> Thinking is supported on `claude-opus-4-20250514`, `claude-sonnet-4-20250514`, and `claude-sonnet-4-5-20250929` models. </Note>

### Effort

The `effort` option provides a simpler way to control reasoning depth without specifying a token budget. It affects thinking, text responses, and function calls.

```ts
import { anthropic, type AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const { text, usage } = await generateText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'How many people will live in the world in 2040?',
  providerOptions: {
    anthropic: {
      effort: 'low', // 'low' | 'medium' | 'high'
    } satisfies AnthropicLanguageModelOptions,
  },
});
```

| Value | Behavior |
| --- | --- |
| `'low'` | Minimal reasoning, fastest responses |
| `'medium'` | Balanced reasoning |
| `'high'` | Thorough reasoning (default) |

### Fast Mode

For `claude-opus-4-6`, the `speed` option enables approximately 2.5x faster output token speeds:

```ts
import { anthropic, type AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const { text } = await generateText({
  model: anthropic('claude-opus-4-6'),
  prompt: 'Write a short poem about the sea.',
  providerOptions: {
    anthropic: {
      speed: 'fast', // 'fast' | 'standard'
    } satisfies AnthropicLanguageModelOptions,
  },
});
```

## Combining Options

You can combine multiple provider options in a single call. For example, using both reasoning effort and reasoning summaries with OpenAI:

```ts
import {
  openai,
  type OpenAILanguageModelResponsesOptions,
} from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-5.2'),
  prompt: 'What are the implications of quantum computing for cryptography?',
  providerOptions: {
    openai: {
      reasoningEffort: 'high',
      reasoningSummary: 'detailed',
    } satisfies OpenAILanguageModelResponsesOptions,
  },
});
```

Or enabling thinking with a low effort level for Anthropic:

```ts
import { anthropic, type AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-opus-4-20250514'),
  prompt: 'Explain the Riemann hypothesis in simple terms.',
  providerOptions: {
    anthropic: {
      thinking: { type: 'enabled', budgetTokens: 8000 },
      effort: 'medium',
    } satisfies AnthropicLanguageModelOptions,
  },
});
```

## Using Provider Options with the AI Gateway

Provider options work the same way when using the Vercel AI Gateway. Use the underlying provider name (e.g. `openai`, `anthropic`) as the key, not `gateway`. The AI Gateway forwards these options to the target provider automatically.

```ts
import type { OpenAILanguageModelResponsesOptions } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: 'openai/gpt-5.2', // AI Gateway model string
  prompt: 'What are the implications of quantum computing for cryptography?',
  providerOptions: {
    openai: {
      reasoningEffort: 'high',
      reasoningSummary: 'detailed',
    } satisfies OpenAILanguageModelResponsesOptions,
  },
});
```

You can also combine gateway-specific options (like routing and fallbacks) with provider-specific options in the same call:

```ts
import type { AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
import type { GatewayLanguageModelOptions } from '@ai-sdk/gateway';
import { generateText } from 'ai';

const result = await generateText({
  model: 'anthropic/claude-sonnet-4',
  prompt: 'Explain quantum computing',
  providerOptions: {
    // Gateway-specific: control routing
    gateway: {
      order: ['vertex', 'anthropic'],
    } satisfies GatewayLanguageModelOptions,
    // Provider-specific: enable reasoning
    anthropic: {
      thinking: { type: 'enabled', budgetTokens: 12000 },
    } satisfies AnthropicLanguageModelOptions,
  },
});
```

For more on gateway routing, fallbacks, and other gateway-specific options, see the AI Gateway provider documentation.

## Type Safety

Each provider exports a type for its options, which you can use with `satisfies` to get autocomplete and catch typos at build time:

```ts
import { type OpenAILanguageModelResponsesOptions } from '@ai-sdk/openai';
import { type AnthropicLanguageModelOptions } from '@ai-sdk/anthropic';
```
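The mechanism can be illustrated without the SDK packages. In this sketch, `ExampleProviderOptions` is a stand-in type (not a real SDK export) showing how `satisfies` rejects misspelled keys while preserving the literal type of the value:

```typescript
// Stand-in for a provider options type; illustrative only.
type ExampleProviderOptions = {
  reasoningEffort?: 'low' | 'medium' | 'high';
};

const options = {
  reasoningEffort: 'low',
  // reasoningEfort: 'low', // <- typo: a compile error under `satisfies`
} satisfies ExampleProviderOptions;

// Unlike a type annotation, `satisfies` keeps the narrow literal type 'low'.
console.log(options.reasoningEffort);
```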

For a full list of available options, see the provider-specific documentation.