The Amazon Bedrock provider for the AI SDK contains language model support for the Amazon Bedrock APIs.
The Bedrock provider is available in the `@ai-sdk/amazon-bedrock` module. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}> <Tab> <Snippet text="pnpm add @ai-sdk/amazon-bedrock" dark /> </Tab> <Tab> <Snippet text="npm install @ai-sdk/amazon-bedrock" dark /> </Tab> <Tab> <Snippet text="yarn add @ai-sdk/amazon-bedrock" dark /> </Tab>
<Tab> <Snippet text="bun add @ai-sdk/amazon-bedrock" dark /> </Tab> </Tabs>

Access to Amazon Bedrock foundation models isn't granted by default. To gain access to a foundation model, an IAM user with sufficient permissions needs to request access to it through the console. Once access is provided to a model, it is available for all users in the account.
See the Model Access Docs for more information.
### Step 1: Creating AWS Access Key and Secret Key
To get started, you'll need to create an AWS access key and secret key. Here's how:
1. Log in to the AWS Management Console.
2. Create an IAM user with the `AmazonBedrockFullAccess` policy attached to it.
3. Create an access key for that user and download the `.csv` file containing the access key ID and secret access key.

### Step 2: Configuring the Access Key and Secret Key
Within your project, add a `.env` file if you don't already have one. This file will be used to set the access key and secret key as environment variables. Add the following lines to the `.env` file:

```bash
AWS_ACCESS_KEY_ID=YOUR_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
AWS_REGION=YOUR_REGION
```

Remember to replace `YOUR_ACCESS_KEY_ID`, `YOUR_SECRET_ACCESS_KEY`, and `YOUR_REGION` with the actual values from your AWS account.
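A quick runtime check can catch missing variables early. A minimal sketch — the `assertAwsEnv` helper is hypothetical, not part of the AI SDK:

```typescript
// Sketch: fail fast when the AWS variables from the .env file above are missing.
// `assertAwsEnv` is a hypothetical helper, not part of the AI SDK.
function assertAwsEnv(env: Record<string, string | undefined> = process.env): void {
  const required = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_REGION'];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing AWS environment variables: ${missing.join(', ')}`);
  }
}
```

Calling this once at startup gives a clear error message instead of a failed API call later.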
When using the AWS SDK, the SDK automatically uses the credential provider chain to determine the credentials: instance profiles, instance roles, ECS roles, EKS service accounts, and so on. You can get similar behavior with the AI SDK by not specifying the `accessKeyId`, `secretAccessKey`, and `sessionToken` properties in the provider settings and instead passing a `credentialProvider` property.
The `@aws-sdk/credential-providers` package provides a set of credential providers that can be used to create a credential provider chain:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}> <Tab> <Snippet text="pnpm add @aws-sdk/credential-providers" dark /> </Tab> <Tab> <Snippet text="npm install @aws-sdk/credential-providers" dark /> </Tab> <Tab> <Snippet text="yarn add @aws-sdk/credential-providers" dark /> </Tab>
<Tab> <Snippet text="bun add @aws-sdk/credential-providers" dark /> </Tab> </Tabs>

```ts
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';
import { fromNodeProviderChain } from '@aws-sdk/credential-providers';

const bedrock = createAmazonBedrock({
  region: 'us-east-1',
  credentialProvider: fromNodeProviderChain(),
});
```
You can import the default provider instance `bedrock` from `@ai-sdk/amazon-bedrock`:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
```
If you need a customized setup, you can import `createAmazonBedrock` from `@ai-sdk/amazon-bedrock` and create a provider instance with your settings:

```ts
import { createAmazonBedrock } from '@ai-sdk/amazon-bedrock';

const bedrock = createAmazonBedrock({
  region: 'us-east-1',
  accessKeyId: 'xxxxxxxxx',
  secretAccessKey: 'xxxxxxxxx',
  sessionToken: 'xxxxxxxxx',
});
```
You can use the following optional settings to customize the Amazon Bedrock provider instance:

- **region** `string`

  The AWS region that you want to use for the API calls.
  It uses the `AWS_REGION` environment variable by default.

- **accessKeyId** `string`

  The AWS access key ID that you want to use for the API calls.
  It uses the `AWS_ACCESS_KEY_ID` environment variable by default.

- **secretAccessKey** `string`

  The AWS secret access key that you want to use for the API calls.
  It uses the `AWS_SECRET_ACCESS_KEY` environment variable by default.

- **sessionToken** `string`

  Optional. The AWS session token that you want to use for the API calls.
  It uses the `AWS_SESSION_TOKEN` environment variable by default.

- **credentialProvider** `() => Promise<{ accessKeyId: string; secretAccessKey: string; sessionToken?: string }>`

  Optional. The AWS credential provider chain that you want to use for the API calls.
  It uses the specified credentials by default.

- **apiKey** `string`

  Optional. API key for authenticating requests using Bearer token authentication.
  When provided, this is used instead of AWS SigV4 authentication.
  It uses the `AWS_BEARER_TOKEN_BEDROCK` environment variable by default.

- **baseURL** `string`

  Optional. Base URL for the Bedrock API calls. Useful for custom endpoints or proxy configurations.

- **headers** `Record<string, string>`

  Optional. Custom headers to include in the requests.

- **fetch** `(input: RequestInfo, init?: RequestInit) => Promise<Response>`

  Optional. Custom fetch implementation. You can use it as middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
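As an example of the `fetch` option, here is a minimal sketch of a logging wrapper. The `withLogging` helper name is hypothetical, not part of the SDK:

```typescript
// Sketch: wrap a fetch implementation so every Bedrock request is logged
// before being forwarded. `withLogging` is a hypothetical helper name.
function withLogging(baseFetch: typeof fetch): typeof fetch {
  return async (input, init) => {
    const url =
      typeof input === 'string'
        ? input
        : input instanceof URL
          ? input.href
          : input.url;
    console.log('Bedrock request:', init?.method ?? 'GET', url);
    return baseFetch(input, init);
  };
}

// Usage with the provider (assuming createAmazonBedrock is imported):
// const bedrock = createAmazonBedrock({ region: 'us-east-1', fetch: withLogging(fetch) });
```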
You can create models that call the Bedrock API using the provider instance.
The first argument is the model ID, e.g. `meta.llama3-70b-instruct-v1:0`:

```ts
const model = bedrock('meta.llama3-70b-instruct-v1:0');
```
Amazon Bedrock models also support some model-specific provider options that are not part of the standard call settings.
You can pass them in the `providerOptions` argument:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const model = bedrock('anthropic.claude-3-sonnet-20240229-v1:0');

await generateText({
  model,
  providerOptions: {
    anthropic: {
      additionalModelRequestFields: { top_k: 350 },
    },
  },
});
```
Documentation for additional settings based on the selected model can be found within the Amazon Bedrock Inference Parameter Documentation.
You can use Amazon Bedrock language models to generate text with the `generateText` function:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const { text } = await generateText({
  model: bedrock('meta.llama3-70b-instruct-v1:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
Amazon Bedrock language models can also be used in the streamText function
(see AI SDK Core).
The Amazon Bedrock provider supports file inputs, e.g. PDF files:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';
import { readFileSync } from 'node:fs';

const result = await generateText({
  model: bedrock('anthropic.claude-3-haiku-20240307-v1:0'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe the pdf in detail.' },
        {
          type: 'file',
          data: readFileSync('./data/ai.pdf'),
          mediaType: 'application/pdf',
        },
      ],
    },
  ],
});
```
You can use the `bedrock` provider options to utilize Amazon Bedrock Guardrails:

```ts
import {
  bedrock,
  type AmazonBedrockLanguageModelOptions,
} from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const result = await generateText({
  model: bedrock('anthropic.claude-3-sonnet-20240229-v1:0'),
  prompt: 'Write a story about space exploration.',
  providerOptions: {
    bedrock: {
      guardrailConfig: {
        guardrailIdentifier: '1abcd2ef34gh',
        guardrailVersion: '1',
        trace: 'enabled' as const,
        streamProcessingMode: 'async',
      },
    } satisfies AmazonBedrockLanguageModelOptions,
  },
});
```
Tracing information will be returned in the provider metadata if you have tracing enabled:

```ts
if (result.providerMetadata?.bedrock.trace) {
  // ...
}
```
See the Amazon Bedrock Guardrails documentation for more information.
Amazon Bedrock supports citations for document-based inputs across compatible models. When citations are enabled, the model's response references the parts of the provided documents it drew from:
```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText, Output } from 'ai';
import { z } from 'zod';
import { readFileSync } from 'node:fs';

const result = await generateText({
  model: bedrock('apac.anthropic.claude-sonnet-4-20250514-v1:0'),
  output: Output.object({
    schema: z.object({
      summary: z.string().describe('Summary of the PDF document'),
      keyPoints: z.array(z.string()).describe('Key points from the PDF'),
    }),
  }),
  messages: [
    {
      role: 'user',
      content: [
        {
          type: 'text',
          text: 'Summarize this PDF and provide key points.',
        },
        {
          type: 'file',
          data: readFileSync('./document.pdf'),
          mediaType: 'application/pdf',
          providerOptions: {
            bedrock: {
              citations: { enabled: true },
            },
          },
        },
      ],
    },
  ],
});

console.log('Response:', result.output);
```
In messages, you can use the `providerOptions` property to set cache points. Set the `bedrock` property in the `providerOptions` object to `{ cachePoint: { type: 'default' } }` to create a cache point.

You can also specify a TTL (time-to-live) for cache points using the `ttl` property. Supported values are `'5m'` (5 minutes, default) and `'1h'` (1 hour). The 1-hour TTL is only supported by Claude Opus 4.5, Claude Haiku 4.5, and Claude Sonnet 4.5.

```ts
providerOptions: {
  bedrock: { cachePoint: { type: 'default', ttl: '1h' } },
}
```
Cache usage information is returned in the `providerMetadata` object. See the examples below.

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const cyberpunkAnalysis =
  '... literary analysis of cyberpunk themes and concepts ...';

const result = await generateText({
  model: bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0'),
  messages: [
    {
      role: 'system',
      content: `You are an expert on William Gibson's cyberpunk literature and themes. You have access to the following academic analysis: ${cyberpunkAnalysis}`,
      providerOptions: {
        bedrock: { cachePoint: { type: 'default' } },
      },
    },
    {
      role: 'user',
      content:
        'What are the key cyberpunk themes that Gibson explores in Neuromancer?',
    },
  ],
});

console.log(result.text);
console.log(result.providerMetadata?.bedrock?.usage);
// Shows cache read/write token usage, e.g.:
// {
//   cacheReadInputTokens: 1337,
//   cacheWriteInputTokens: 42,
// }
```
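To gauge how effective caching is across requests, you can compute a read ratio over these usage fields. A small sketch — the `cacheHitRatio` helper is hypothetical:

```typescript
// Sketch: fraction of cached prompt tokens that were read (cache hit)
// rather than written, based on the usage fields shown above.
function cacheHitRatio(usage: {
  cacheReadInputTokens?: number;
  cacheWriteInputTokens?: number;
}): number {
  const read = usage.cacheReadInputTokens ?? 0;
  const write = usage.cacheWriteInputTokens ?? 0;
  const total = read + write;
  return total === 0 ? 0 : read / total;
}
```

A ratio close to 1 means most of the cached prompt prefix was reused; a ratio near 0 means the cache was mostly being populated.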
Cache points also work with streaming responses:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { streamText } from 'ai';

const cyberpunkAnalysis =
  '... literary analysis of cyberpunk themes and concepts ...';

const result = streamText({
  model: bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0'),
  messages: [
    {
      role: 'assistant',
      content: [
        { type: 'text', text: 'You are an expert on cyberpunk literature.' },
        { type: 'text', text: `Academic analysis: ${cyberpunkAnalysis}` },
      ],
      providerOptions: { bedrock: { cachePoint: { type: 'default' } } },
    },
    {
      role: 'user',
      content:
        'How does Gibson explore the relationship between humanity and technology?',
    },
  ],
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

console.log(
  'Cache token usage:',
  (await result.providerMetadata)?.bedrock?.usage,
);
// Shows cache read/write token usage, e.g.:
// {
//   cacheReadInputTokens: 1337,
//   cacheWriteInputTokens: 42,
// }
```
The following Bedrock-specific metadata may be returned in `providerMetadata.bedrock`:

- Latency configuration when using optimized inference, e.g. `{ latency: 'optimized' }`.
- Throughput information, e.g. `{ type: 'on-demand' }`.
- Prompt cache usage, including `cacheWriteInputTokens` and `cacheDetails`.

Amazon Bedrock supports model creator-specific reasoning features:
- Anthropic Claude models (e.g. `us.anthropic.claude-sonnet-4-5-20250929-v1:0`): enable reasoning via the `reasoningConfig` provider option, specifying a thinking budget in tokens (minimum: 1024, maximum: 64000).
- Amazon Nova 2 models (e.g. `us.amazon.nova-2-lite-v1:0`): enable reasoning via the `reasoningConfig` provider option, specifying a maximum reasoning effort level (`'low' | 'medium' | 'high'`).

```ts
import {
  bedrock,
  type AmazonBedrockLanguageModelOptions,
} from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

// Anthropic example
const anthropicResult = await generateText({
  model: bedrock('us.anthropic.claude-sonnet-4-5-20250929-v1:0'),
  prompt: 'How many people will live in the world in 2040?',
  providerOptions: {
    bedrock: {
      reasoningConfig: { type: 'enabled', budgetTokens: 1024 },
    } satisfies AmazonBedrockLanguageModelOptions,
  },
});

console.log(anthropicResult.reasoningText); // reasoning text
console.log(anthropicResult.text); // text response

// Nova 2 example
const amazonResult = await generateText({
  model: bedrock('us.amazon.nova-2-lite-v1:0'),
  prompt: 'How many people will live in the world in 2040?',
  providerOptions: {
    bedrock: {
      reasoningConfig: { type: 'enabled', maxReasoningEffort: 'medium' },
    } satisfies AmazonBedrockLanguageModelOptions,
  },
});

console.log(amazonResult.reasoningText); // reasoning text
console.log(amazonResult.text); // text response
```
See AI SDK UI: Chatbot for more details on how to integrate reasoning into your chatbot.
Claude Sonnet 4 models on Amazon Bedrock support an extended context window of up to 1 million tokens when using the `context-1m-2025-08-07` beta feature:

```ts
import {
  bedrock,
  type AmazonBedrockLanguageModelOptions,
} from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const result = await generateText({
  model: bedrock('us.anthropic.claude-sonnet-4-20250514-v1:0'),
  prompt: 'analyze this large document...',
  providerOptions: {
    bedrock: {
      anthropicBeta: ['context-1m-2025-08-07'],
    } satisfies AmazonBedrockLanguageModelOptions,
  },
});
```
Via Anthropic, Amazon Bedrock provides three provider-defined tools that can be used to interact with external systems: the Bash Tool, the Text Editor Tool, and the Computer Tool. They are available via the `tools` property of the provider instance.
The Bash Tool allows running bash commands. Here's how to create and use it:

```ts
const bashTool = bedrock.tools.bash_20241022({
  execute: async ({ command, restart }) => {
    // Implement your bash command execution logic here
    // Return the result of the command execution
  },
});
```
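The `execute` callback is yours to implement. A minimal sketch using `node:child_process` — hypothetical, and in real deployments you should only run model-generated commands inside a sandbox:

```typescript
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

// Sketch of an execute implementation for the bash tool.
// Only run model-generated commands inside a sandbox in real deployments.
async function executeBash({
  command,
  restart,
}: {
  command?: string;
  restart?: boolean;
}): Promise<string> {
  if (restart) return 'bash tool restarted';
  if (!command) return 'no command provided';
  try {
    const { stdout, stderr } = await run('bash', ['-c', command]);
    return stdout + stderr;
  } catch (error) {
    return `command failed: ${(error as Error).message}`;
  }
}
```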
Parameters:

- `command` (string): The bash command to run. Required unless the tool is being restarted.
- `restart` (boolean, optional): Specifying `true` will restart this tool.

The Text Editor Tool provides functionality for viewing and editing text files.
For Claude 4 models (Opus & Sonnet):

```ts
const textEditorTool = bedrock.tools.textEditor_20250429({
  execute: async ({
    command,
    path,
    file_text,
    insert_line,
    new_str,
    insert_text,
    old_str,
    view_range,
  }) => {
    // Implement your text editing logic here
    // Return the result of the text editing operation
  },
});
```
For Claude 3.5 Sonnet and earlier models:

```ts
const textEditorTool = bedrock.tools.textEditor_20241022({
  execute: async ({
    command,
    path,
    file_text,
    insert_line,
    new_str,
    insert_text,
    old_str,
    view_range,
  }) => {
    // Implement your text editing logic here
    // Return the result of the text editing operation
  },
});
```
Parameters:

- `command` (`'view' | 'create' | 'str_replace' | 'insert' | 'undo_edit'`): The command to run. Note: `undo_edit` is only available in Claude 3.5 Sonnet and earlier models.
- `path` (string): Absolute path to file or directory, e.g. `/repo/file.py` or `/repo`.
- `file_text` (string, optional): Required for the `create` command, with the content of the file to be created.
- `insert_line` (number, optional): Required for the `insert` command. The line number after which to insert the new string.
- `new_str` (string, optional): New string for the `str_replace` command.
- `insert_text` (string, optional): Required for the `insert` command, containing the text to insert.
- `old_str` (string, optional): Required for the `str_replace` command, containing the string to replace.
- `view_range` (`number[]`, optional): Optional for the `view` command to specify the line range to show.

When using the Text Editor Tool, make sure to name the key in the tools object correctly:

- Claude 4 models: `str_replace_based_edit_tool`
- Claude 3.5 Sonnet and earlier: `str_replace_editor`

```ts
// For Claude 4 models
const response = await generateText({
  model: bedrock('us.anthropic.claude-sonnet-4-20250514-v1:0'),
  prompt:
    "Create a new file called example.txt, write 'Hello World' to it, and run 'cat example.txt' in the terminal",
  tools: {
    str_replace_based_edit_tool: textEditorTool, // Claude 4 tool name
  },
});
```

```ts
// For Claude 3.5 Sonnet and earlier
const response = await generateText({
  model: bedrock('anthropic.claude-3-5-sonnet-20241022-v2:0'),
  prompt:
    "Create a new file called example.txt, write 'Hello World' to it, and run 'cat example.txt' in the terminal",
  tools: {
    str_replace_editor: textEditorTool, // Earlier models tool name
  },
});
```
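For testing, the editor commands can be backed by a simple in-memory store rather than a real filesystem. A minimal sketch — the `applyEditorCommand` helper is hypothetical and covers only a subset of the commands:

```typescript
// Sketch: in-memory backing store for the text editor tool commands.
// Covers only create / view / str_replace / insert; hypothetical helper.
const files = new Map<string, string>();

function applyEditorCommand(input: {
  command: 'view' | 'create' | 'str_replace' | 'insert';
  path: string;
  file_text?: string;
  old_str?: string;
  new_str?: string;
  insert_line?: number;
  insert_text?: string;
}): string {
  switch (input.command) {
    case 'create':
      files.set(input.path, input.file_text ?? '');
      return `created ${input.path}`;
    case 'view':
      return files.get(input.path) ?? `no such file: ${input.path}`;
    case 'str_replace': {
      const current = files.get(input.path) ?? '';
      files.set(
        input.path,
        current.replace(input.old_str ?? '', input.new_str ?? ''),
      );
      return `edited ${input.path}`;
    }
    case 'insert': {
      const lines = (files.get(input.path) ?? '').split('\n');
      lines.splice(input.insert_line ?? 0, 0, input.insert_text ?? '');
      files.set(input.path, lines.join('\n'));
      return `inserted into ${input.path}`;
    }
  }
}
```

Plugging something like this into `execute` lets you exercise the tool loop without touching the real filesystem.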
The Computer Tool enables control of keyboard and mouse actions on a computer:

```ts
import fs from 'node:fs';

const computerTool = bedrock.tools.computer_20241022({
  displayWidthPx: 1920,
  displayHeightPx: 1080,
  displayNumber: 0, // Optional, for X11 environments
  execute: async ({ action, coordinate, text }) => {
    // Implement your computer control logic here
    // Return the result of the action
    // Example code:
    switch (action) {
      case 'screenshot': {
        // multipart result:
        return {
          type: 'image',
          data: fs
            .readFileSync('./data/screenshot-editor.png')
            .toString('base64'),
        };
      }
      default: {
        console.log('Action:', action);
        console.log('Coordinate:', coordinate);
        console.log('Text:', text);
        return `executed ${action}`;
      }
    }
  },

  // map to tool result content for LLM consumption:
  toModelOutput({ output }) {
    return typeof output === 'string'
      ? [{ type: 'text', text: output }]
      : [{ type: 'image', data: output.data, mediaType: 'image/png' }];
  },
});
```
Parameters:

- `action` (`'key' | 'type' | 'mouse_move' | 'left_click' | 'left_click_drag' | 'right_click' | 'middle_click' | 'double_click' | 'screenshot' | 'cursor_position'`): The action to perform.
- `coordinate` (`number[]`, optional): Required for the `mouse_move` and `left_click_drag` actions. Specifies the (x, y) coordinates.
- `text` (string, optional): Required for the `type` and `key` actions.

These tools can be used in conjunction with the `anthropic.claude-3-5-sonnet-20240620-v1:0` model to enable more complex interactions and tasks.
| Model | Image Input | Object Generation | Tool Usage | Tool Streaming |
|---|---|---|---|---|
amazon.titan-tg1-large | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
amazon.titan-text-express-v1 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
amazon.titan-text-lite-v1 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
us.amazon.nova-premier-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.amazon.nova-pro-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.amazon.nova-lite-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.amazon.nova-micro-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-haiku-4-5-20251001-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-sonnet-4-20250514-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-sonnet-4-5-20250929-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-opus-4-20250514-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-opus-4-1-20250805-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-3-5-sonnet-20241022-v2:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-3-5-sonnet-20240620-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-3-opus-20240229-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-3-sonnet-20240229-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-3-haiku-20240307-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.anthropic.claude-sonnet-4-20250514-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.anthropic.claude-sonnet-4-5-20250929-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.anthropic.claude-opus-4-20250514-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.anthropic.claude-opus-4-1-20250805-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.anthropic.claude-3-5-sonnet-20241022-v2:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.anthropic.claude-3-5-sonnet-20240620-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.anthropic.claude-3-sonnet-20240229-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.anthropic.claude-3-opus-20240229-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.anthropic.claude-3-haiku-20240307-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
anthropic.claude-v2 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
anthropic.claude-v2:1 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
anthropic.claude-instant-v1 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
cohere.command-text-v14 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
cohere.command-light-text-v14 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
cohere.command-r-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Cross size={18} /> |
cohere.command-r-plus-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Check size={18} /> | <Cross size={18} /> |
us.deepseek.r1-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
meta.llama3-8b-instruct-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
meta.llama3-70b-instruct-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
meta.llama3-1-8b-instruct-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
meta.llama3-1-70b-instruct-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
meta.llama3-1-405b-instruct-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
meta.llama3-2-1b-instruct-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
meta.llama3-2-3b-instruct-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
meta.llama3-2-11b-instruct-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
meta.llama3-2-90b-instruct-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
us.meta.llama3-2-1b-instruct-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.meta.llama3-2-3b-instruct-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.meta.llama3-2-11b-instruct-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.meta.llama3-2-90b-instruct-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.meta.llama3-1-8b-instruct-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.meta.llama3-1-70b-instruct-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.meta.llama3-3-70b-instruct-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.meta.llama4-scout-17b-instruct-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
us.meta.llama4-maverick-17b-instruct-v1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
mistral.mistral-7b-instruct-v0:2 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
mistral.mixtral-8x7b-instruct-v0:1 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
mistral.mistral-large-2402-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
mistral.mistral-small-2402-v1:0 | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
us.mistral.pixtral-large-2502-v1:0 | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
openai.gpt-oss-120b-1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
openai.gpt-oss-20b-1:0 | <Cross size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
You can create models that call the Bedrock API
using the `.embedding()` factory method:

```ts
const model = bedrock.embedding('amazon.titan-embed-text-v1');
```
The Bedrock Titan embedding model `amazon.titan-embed-text-v2:0` supports several additional settings. You can pass them as an options argument:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { type AmazonBedrockEmbeddingModelOptions } from '@ai-sdk/amazon-bedrock';
import { embed } from 'ai';

const model = bedrock.embedding('amazon.titan-embed-text-v2:0');

const { embedding } = await embed({
  model,
  value: 'sunny day at the beach',
  providerOptions: {
    bedrock: {
      dimensions: 512, // optional, number of dimensions for the embedding
      normalize: true, // optional, normalize the output embeddings
    } satisfies AmazonBedrockEmbeddingModelOptions,
  },
});
```
The following optional provider options are available for Bedrock Titan embedding models:

- `dimensions` (number): The number of dimensions the output embeddings should have. Accepted values: 1024 (default), 512, 256.
- `normalize` (boolean): Flag indicating whether to normalize the output embeddings. Defaults to `true`.
Amazon Nova embedding models support additional provider options:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { type AmazonBedrockEmbeddingModelOptions } from '@ai-sdk/amazon-bedrock';
import { embed } from 'ai';

const { embedding } = await embed({
  model: bedrock.embedding('amazon.nova-embed-text-v2:0'),
  value: 'sunny day at the beach',
  providerOptions: {
    bedrock: {
      embeddingDimension: 1024, // optional, number of dimensions
      embeddingPurpose: 'TEXT_RETRIEVAL', // optional, purpose of embedding
      truncate: 'END', // optional, truncation behavior
    } satisfies AmazonBedrockEmbeddingModelOptions,
  },
});
```
The following optional provider options are available for Nova embedding models:

- `embeddingDimension` (number): The number of dimensions for the output embeddings. Supported values: 256, 384, 1024 (default), 3072.
- `embeddingPurpose` (string): The purpose of the embedding. Accepts: `GENERIC_INDEX` (default), `TEXT_RETRIEVAL`, `IMAGE_RETRIEVAL`, `VIDEO_RETRIEVAL`, `DOCUMENT_RETRIEVAL`, `AUDIO_RETRIEVAL`, `GENERIC_RETRIEVAL`, `CLASSIFICATION`, `CLUSTERING`.
- `truncate` (string): Truncation behavior when input exceeds the model's context length. Accepts: `NONE`, `START`, `END` (default).
Cohere embedding models on Bedrock require an `inputType` and support truncation:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { type AmazonBedrockEmbeddingModelOptions } from '@ai-sdk/amazon-bedrock';
import { embed } from 'ai';

const { embedding } = await embed({
  model: bedrock.embedding('cohere.embed-english-v3'),
  value: 'sunny day at the beach',
  providerOptions: {
    bedrock: {
      inputType: 'search_document', // required for Cohere
      truncate: 'END', // optional, truncation behavior
    } satisfies AmazonBedrockEmbeddingModelOptions,
  },
});
```
The following provider options are available for Cohere embedding models:

- `inputType` (string): Input type for Cohere embedding models. Accepts: `search_document`, `search_query` (default), `classification`, `clustering`.
- `truncate` (string): Truncation behavior when input exceeds the model's context length. Accepts: `NONE`, `START`, `END`.
| Model | Default Dimensions | Custom Dimensions |
|---|---|---|
amazon.titan-embed-text-v1 | 1536 | <Cross size={18} /> |
amazon.titan-embed-text-v2:0 | 1024 | <Check size={18} /> |
amazon.nova-embed-text-v2:0 | 1024 | <Check size={18} /> |
cohere.embed-english-v3 | 1024 | <Cross size={18} /> |
cohere.embed-multilingual-v3 | 1024 | <Cross size={18} /> |
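Embeddings from any of these models can be compared with cosine similarity (the `ai` package also exports a `cosineSimilarity` helper). A self-contained sketch:

```typescript
// Sketch: cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// cosineSimilarity([1, 0], [1, 0]) → 1 (identical direction)
// cosineSimilarity([1, 0], [0, 1]) → 0 (orthogonal)
```

Note that vectors from models with different dimensions (see the table above) cannot be compared directly.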
You can create models that call the Bedrock Rerank API
using the `.reranking()` factory method:

```ts
const model = bedrock.reranking('cohere.rerank-v3-5:0');
```
You can use Amazon Bedrock reranking models to rerank documents with the `rerank` function:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { rerank } from 'ai';

const documents = [
  'sunny day at the beach',
  'rainy afternoon in the city',
  'snowy night in the mountains',
];

const { ranking } = await rerank({
  model: bedrock.reranking('cohere.rerank-v3-5:0'),
  documents,
  query: 'talk about rain',
  topN: 2,
});

console.log(ranking);
// [
//   { originalIndex: 1, score: 0.9, document: 'rainy afternoon in the city' },
//   { originalIndex: 0, score: 0.3, document: 'sunny day at the beach' }
// ]
```
Amazon Bedrock reranking models support additional provider options that can be passed via `providerOptions.bedrock`:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { rerank } from 'ai';

const { ranking } = await rerank({
  model: bedrock.reranking('cohere.rerank-v3-5:0'),
  documents: ['sunny day at the beach', 'rainy afternoon in the city'],
  query: 'talk about rain',
  providerOptions: {
    bedrock: {
      nextToken: 'pagination_token_here',
    },
  },
});
```
The following provider options are available:

- `nextToken` (string): Token for pagination of results.
- `additionalModelRequestFields` (`Record<string, unknown>`): Additional model-specific request fields.
| Model |
|---|
amazon.rerank-v1:0 |
cohere.rerank-v3-5:0 |
You can create models that call the Bedrock API
using the `.image()` factory method.

For more on the Amazon Nova Canvas image model, see the Nova Canvas Overview.

<Note> The `amazon.nova-canvas-v1:0` model is available in the `us-east-1`, `eu-west-1`, and `ap-northeast-1` regions. </Note>

```ts
const model = bedrock.image('amazon.nova-canvas-v1:0');
```
You can then generate images with the `generateImage` function:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateImage } from 'ai';

const { image } = await generateImage({
  model: bedrock.image('amazon.nova-canvas-v1:0'),
  prompt: 'A beautiful sunset over a calm ocean',
  size: '512x512',
  seed: 42,
});
```
You can also pass the `providerOptions` object to the `generateImage` function to customize the generation behavior:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateImage } from 'ai';

const { image } = await generateImage({
  model: bedrock.image('amazon.nova-canvas-v1:0'),
  prompt: 'A beautiful sunset over a calm ocean',
  size: '512x512',
  seed: 42,
  providerOptions: {
    bedrock: {
      quality: 'premium',
      negativeText: 'blurry, low quality',
      cfgScale: 7.5,
      style: 'PHOTOREALISM',
    },
  },
});
```
The following optional provider options are available for Amazon Nova Canvas:

- `quality` (string): The quality level for image generation. Accepts `'standard'` or `'premium'`.
- `negativeText` (string): Text describing what you don't want in the generated image.
- `cfgScale` (number): Controls how closely the generated image adheres to the prompt. Higher values result in images that are more closely aligned to the prompt.
- `style` (string): Predefined visual style for image generation. Accepts one of: `3D_ANIMATED_FAMILY_FILM`, `DESIGN_SKETCH`, `FLAT_VECTOR_ILLUSTRATION`, `GRAPHIC_NOVEL_ILLUSTRATION`, `MAXIMALISM`, `MIDCENTURY_RETRO`, `PHOTOREALISM`, `SOFT_DIGITAL_PAINTING`.
Documentation for additional settings can be found within the Amazon Bedrock User Guide for Amazon Nova Documentation.
Amazon Nova Canvas supports several image editing task types. When you provide input images via `prompt.images`, the model automatically detects the appropriate editing mode, or you can explicitly specify the `taskType` in provider options.
Create variations of an existing image while maintaining its core characteristics:

```ts
const imageBuffer = readFileSync('./input-image.png');

const { images } = await generateImage({
  model: bedrock.image('amazon.nova-canvas-v1:0'),
  prompt: {
    text: 'Modernize the style, photo-realistic, 8k, hdr',
    images: [imageBuffer],
  },
  providerOptions: {
    bedrock: {
      taskType: 'IMAGE_VARIATION',
      similarityStrength: 0.7, // 0-1, higher = closer to original
      negativeText: 'bad quality, low resolution',
    },
  },
});
```
similarityStrength number
Controls how similar the output is to the input image. Values range from 0 to 1, where higher values produce results closer to the original.
Edit specific parts of an image. You can define the area to modify using either a mask image or a text prompt:
Using a mask prompt (text-based selection):
```ts
import { readFileSync } from 'node:fs';
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateImage } from 'ai';

const imageBuffer = readFileSync('./input-image.png');

const { images } = await generateImage({
  model: bedrock.image('amazon.nova-canvas-v1:0'),
  prompt: {
    text: 'a cute corgi dog in the same style',
    images: [imageBuffer],
  },
  providerOptions: {
    bedrock: {
      maskPrompt: 'cat', // Describe what to replace
    },
  },
  seed: 42,
});
```
Using a mask image:
```ts
import { readFileSync } from 'node:fs';
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateImage } from 'ai';

const image = readFileSync('./input-image.png');
const mask = readFileSync('./mask.png'); // White pixels = area to change

const { images } = await generateImage({
  model: bedrock.image('amazon.nova-canvas-v1:0'),
  prompt: {
    text: 'A sunlit indoor lounge area with a pool containing a flamingo',
    images: [image],
    mask: mask,
  },
});
```
maskPrompt string
A text description of the area to modify. The model will automatically identify and mask the described region.
Extend an image beyond its original boundaries:
```ts
import { readFileSync } from 'node:fs';
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateImage } from 'ai';

const imageBuffer = readFileSync('./input-image.png');

const { images } = await generateImage({
  model: bedrock.image('amazon.nova-canvas-v1:0'),
  prompt: {
    text: 'A beautiful sunset landscape with mountains',
    images: [imageBuffer],
  },
  providerOptions: {
    bedrock: {
      taskType: 'OUTPAINTING',
      maskPrompt: 'background',
      outPaintingMode: 'DEFAULT', // or 'PRECISE'
    },
  },
});
```
outPaintingMode string
Controls how the outpainting is performed. Accepts 'DEFAULT' or 'PRECISE'.
Remove the background from an image:
```ts
import { readFileSync } from 'node:fs';
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateImage } from 'ai';

const imageBuffer = readFileSync('./input-image.png');

const { images } = await generateImage({
  model: bedrock.image('amazon.nova-canvas-v1:0'),
  prompt: {
    images: [imageBuffer],
  },
  providerOptions: {
    bedrock: {
      taskType: 'BACKGROUND_REMOVAL',
    },
  },
});
```
The following additional provider options are available for image editing:
taskType string
Explicitly set the editing task type. Accepts 'TEXT_IMAGE' (default for text-only), 'IMAGE_VARIATION', 'INPAINTING', 'OUTPAINTING', or 'BACKGROUND_REMOVAL'. When images are provided without an explicit taskType, the model defaults to 'IMAGE_VARIATION' (or 'INPAINTING' if a mask is provided).
maskPrompt string
Text description of the area to modify (for inpainting/outpainting). Alternative to providing a mask image.
similarityStrength number
For IMAGE_VARIATION: Controls similarity to the original (0-1).
outPaintingMode string
For OUTPAINTING: Controls the outpainting behavior ('DEFAULT' or 'PRECISE').
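The `taskType` defaulting rules described above can be sketched as a small pure function. This is an illustrative sketch of the documented behavior only — `resolveTaskType` is not an SDK export:

```ts
type NovaCanvasTaskType =
  | 'TEXT_IMAGE'
  | 'IMAGE_VARIATION'
  | 'INPAINTING'
  | 'OUTPAINTING'
  | 'BACKGROUND_REMOVAL';

// Illustrative sketch (not an SDK export) of the documented defaulting rules:
// an explicit taskType always wins; otherwise the mode is inferred from
// whether input images and a mask were provided.
function resolveTaskType(options: {
  taskType?: NovaCanvasTaskType; // explicit provider option
  hasImages: boolean; // prompt.images provided?
  hasMask: boolean; // mask image or maskPrompt provided?
}): NovaCanvasTaskType {
  if (options.taskType) return options.taskType;
  if (!options.hasImages) return 'TEXT_IMAGE';
  return options.hasMask ? 'INPAINTING' : 'IMAGE_VARIATION';
}
```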
You can customize the generation behavior with optional parameters:

```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateImage } from 'ai';

await generateImage({
  model: bedrock.image('amazon.nova-canvas-v1:0'),
  prompt: 'A beautiful sunset over a calm ocean',
  size: '512x512',
  seed: 42,
  maxImagesPerCall: 1, // Maximum number of images to generate per API call
});
```
maxImagesPerCall number
Override the maximum number of images generated per API call. Default can vary by model, with 5 as a common default.
The Amazon Nova Canvas model supports custom sizes with constraints as follows:
For more, see Image generation access and usage.
| Model | Sizes |
| --- | --- |
| `amazon.nova-canvas-v1:0` | Custom sizes: 320-4096 px per side (must be divisible by 16), aspect ratio between 1:4 and 4:1, maximum of 4.2M total pixels |
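The size constraints above can be checked locally before making a request. This is an illustrative sketch only — `isValidNovaCanvasSize` is not part of the AI SDK:

```ts
// Illustrative validator (not an SDK export) for the documented Amazon Nova
// Canvas size constraints: each side 320-4096 px and divisible by 16, aspect
// ratio between 1:4 and 4:1, and at most 4.2M total pixels.
function isValidNovaCanvasSize(width: number, height: number): boolean {
  const sideOk = (n: number) => n >= 320 && n <= 4096 && n % 16 === 0;
  if (!sideOk(width) || !sideOk(height)) return false;
  const aspect = width / height;
  if (aspect < 1 / 4 || aspect > 4) return false;
  return width * height <= 4_200_000;
}
```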
The Amazon Bedrock provider returns the response headers associated with network requests made to the Bedrock servers.
```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { generateText } from 'ai';

const result = await generateText({
  model: bedrock('meta.llama3-70b-instruct-v1:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

console.log(result.response.headers);
```
Below is sample output where you can see the `x-amzn-requestid` header. This can be useful for correlating Bedrock API calls with requests made by the AI SDK:

```js
{
  connection: 'keep-alive',
  'content-length': '2399',
  'content-type': 'application/json',
  date: 'Fri, 07 Feb 2025 04:28:30 GMT',
  'x-amzn-requestid': 'c9f3ace4-dd5d-49e5-9807-39aedfa47c8e'
}
```
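For log correlation, a tiny helper can pull the request id out of the headers record. This is an illustrative sketch only — `getBedrockRequestId` is not an SDK export:

```ts
// Illustrative helper (not an SDK export): extract the Bedrock request id
// from a response headers record for correlating calls in your logs.
function getBedrockRequestId(
  headers: Record<string, string> | undefined,
): string | undefined {
  return headers?.['x-amzn-requestid'];
}
```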
This information is also available with streamText:
```ts
import { bedrock } from '@ai-sdk/amazon-bedrock';
import { streamText } from 'ai';

const result = streamText({
  model: bedrock('meta.llama3-70b-instruct-v1:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});

for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

console.log('Response headers:', (await result.response).headers);
```
With sample output such as:

```js
{
  connection: 'keep-alive',
  'content-type': 'application/vnd.amazon.eventstream',
  date: 'Fri, 07 Feb 2025 04:33:37 GMT',
  'transfer-encoding': 'chunked',
  'x-amzn-requestid': 'a976e3fc-0e45-4241-9954-b9bdd80ab407'
}
```
The Bedrock Anthropic provider offers support for Anthropic's Claude models through Amazon Bedrock's native `InvokeModel` API. This provides full feature parity with the Anthropic API, including features that may not be available through the Converse API (such as `stop_sequence` in streaming responses).
For more information on Claude models available on Amazon Bedrock, see Claude on Amazon Bedrock.
You can import the default provider instance bedrockAnthropic from @ai-sdk/amazon-bedrock/anthropic:
```ts
import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
```
If you need a customized setup, you can import createBedrockAnthropic from @ai-sdk/amazon-bedrock/anthropic and create a provider instance with your settings:
```ts
import { createBedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';

const bedrockAnthropic = createBedrockAnthropic({
  region: 'us-east-1', // optional
  accessKeyId: 'xxxxxxxxx', // optional
  secretAccessKey: 'xxxxxxxxx', // optional
  sessionToken: 'xxxxxxxxx', // optional
});
```
You can use the following optional settings to customize the Bedrock Anthropic provider instance:
region string
The AWS region that you want to use for the API calls.
It uses the AWS_REGION environment variable by default.
accessKeyId string
The AWS access key ID that you want to use for the API calls.
It uses the AWS_ACCESS_KEY_ID environment variable by default.
secretAccessKey string
The AWS secret access key that you want to use for the API calls.
It uses the AWS_SECRET_ACCESS_KEY environment variable by default.
sessionToken string
Optional. The AWS session token that you want to use for the API calls.
It uses the AWS_SESSION_TOKEN environment variable by default.
apiKey string
API key for authenticating requests using Bearer token authentication.
When provided, this will be used instead of AWS SigV4 authentication.
It uses the AWS_BEARER_TOKEN_BEDROCK environment variable by default.
baseURL string
Base URL for the Bedrock API calls. Useful for custom endpoints or proxy configurations.
headers Resolvable<Record<string, string | undefined>>
Headers to include in the requests.
fetch (input: RequestInfo, init?: RequestInit) => Promise<Response>
Custom fetch implementation. You can use it as a middleware to intercept requests, or to provide a custom fetch implementation for e.g. testing.
credentialProvider () => PromiseLike<BedrockCredentials>
The AWS credential provider to use for the Bedrock provider to get dynamic
credentials similar to the AWS SDK. Setting a provider here will cause its
credential values to be used instead of the accessKeyId, secretAccessKey,
and sessionToken settings.
You can create models that call the Anthropic Messages API using the provider instance.
The first argument is the model id, e.g. us.anthropic.claude-3-5-sonnet-20241022-v2:0.
```ts
const model = bedrockAnthropic('us.anthropic.claude-3-5-sonnet-20241022-v2:0');
```
You can use Bedrock Anthropic language models to generate text with the generateText function:
```ts
import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
import { generateText } from 'ai';

const { text } = await generateText({
  model: bedrockAnthropic('us.anthropic.claude-3-5-sonnet-20241022-v2:0'),
  prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
```
In the messages and message parts, you can use the providerOptions property to set cache control breakpoints.
You need to set the anthropic property in the providerOptions object to { cacheControl: { type: 'ephemeral' } } to set a cache control breakpoint.
```ts
import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: bedrockAnthropic('us.anthropic.claude-sonnet-4-5-20250929-v1:0'),
  messages: [
    {
      role: 'system',
      content: 'You are an expert assistant.',
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
    {
      role: 'user',
      content: 'Explain quantum computing.',
    },
  ],
});
```
The Bedrock Anthropic provider supports Anthropic's computer use tools, which are available via the tools property of the provider instance.
The bash tool lets the model run shell commands through your execute implementation:

```ts
import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
import { generateText, stepCountIs } from 'ai';

const result = await generateText({
  model: bedrockAnthropic('us.anthropic.claude-sonnet-4-5-20250929-v1:0'),
  tools: {
    bash: bedrockAnthropic.tools.bash_20241022({
      execute: async ({ command }) => {
        // Implement your bash command execution logic here
        return [{ type: 'text', text: `Executed: ${command}` }];
      },
    }),
  },
  prompt: 'List the files in my directory.',
  stopWhen: stepCountIs(2),
});
```
The text editor tool lets the model create and edit files through your execute implementation:

```ts
import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
import { generateText, stepCountIs } from 'ai';

const result = await generateText({
  model: bedrockAnthropic('us.anthropic.claude-sonnet-4-5-20250929-v1:0'),
  tools: {
    str_replace_editor: bedrockAnthropic.tools.textEditor_20241022({
      execute: async ({ command, path, old_str, new_str, insert_text }) => {
        // Implement your text editing logic here
        return 'File updated successfully';
      },
    }),
  },
  prompt: 'Update my README file.',
  stopWhen: stepCountIs(5),
});
```
The computer tool enables screen interactions such as taking screenshots:

```ts
import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
import { generateText, stepCountIs } from 'ai';
import fs from 'fs';

const result = await generateText({
  model: bedrockAnthropic('us.anthropic.claude-sonnet-4-5-20250929-v1:0'),
  tools: {
    computer: bedrockAnthropic.tools.computer_20241022({
      displayWidthPx: 1024,
      displayHeightPx: 768,
      execute: async ({ action, coordinate, text }) => {
        if (action === 'screenshot') {
          return {
            type: 'image',
            data: fs.readFileSync('./screenshot.png').toString('base64'),
          };
        }
        return `executed ${action}`;
      },
      toModelOutput({ output }) {
        return {
          type: 'content',
          value: [
            typeof output === 'string'
              ? { type: 'text', text: output }
              : {
                  type: 'image-data',
                  data: output.data,
                  mediaType: 'image/png',
                },
          ],
        };
      },
    }),
  },
  prompt: 'Take a screenshot.',
  stopWhen: stepCountIs(3),
});
```
Anthropic has reasoning support for Claude 3.7 and Claude 4 models on Bedrock, including:
- `us.anthropic.claude-opus-4-6-v1`
- `us.anthropic.claude-opus-4-5-20251101-v1:0`
- `us.anthropic.claude-sonnet-4-5-20250929-v1:0`
- `us.anthropic.claude-opus-4-20250514-v1:0`
- `us.anthropic.claude-sonnet-4-20250514-v1:0`
- `us.anthropic.claude-opus-4-1-20250805-v1:0`
- `us.anthropic.claude-haiku-4-5-20251001-v1:0`

You can enable it using the thinking provider option and specifying a thinking budget in tokens.
```ts
import { bedrockAnthropic } from '@ai-sdk/amazon-bedrock/anthropic';
import { generateText } from 'ai';

const { text, reasoningText, reasoning } = await generateText({
  model: bedrockAnthropic('us.anthropic.claude-sonnet-4-5-20250929-v1:0'),
  prompt: 'How many people will live in the world in 2040?',
  providerOptions: {
    anthropic: {
      thinking: { type: 'enabled', budgetTokens: 12000 },
    },
  },
});

console.log(reasoningText); // reasoning text
console.log(reasoning); // reasoning details including redacted reasoning
console.log(text); // text response
```
See AI SDK UI: Chatbot for more details on how to integrate reasoning into your chatbot.
| Model | Image Input | Object Generation | Tool Usage | Computer Use | Reasoning |
|---|---|---|---|---|---|
| `us.anthropic.claude-opus-4-6-v1` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `us.anthropic.claude-opus-4-5-20251101-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `us.anthropic.claude-sonnet-4-5-20250929-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `us.anthropic.claude-opus-4-20250514-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `us.anthropic.claude-sonnet-4-20250514-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `us.anthropic.claude-opus-4-1-20250805-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `us.anthropic.claude-haiku-4-5-20251001-v1:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> |
| `us.anthropic.claude-3-5-sonnet-20241022-v2:0` | <Check size={18} /> | <Check size={18} /> | <Check size={18} /> | <Cross size={18} /> | <Cross size={18} /> |
@ai-sdk/amazon-bedrock 2.x

The Amazon Bedrock provider was rewritten in version 2.x to remove the dependency on the @aws-sdk/client-bedrock-runtime package.
The bedrockOptions provider setting previously available has been removed. If you were using the bedrockOptions object, you should now use the region, accessKeyId, secretAccessKey, and sessionToken settings directly instead. Note that you may need to set all of these explicitly, e.g. even if you're not using sessionToken, set it to undefined. If you're running in a serverless environment, your containing environment may set default environment variables that the Amazon Bedrock provider will then pick up, which could conflict with the ones you intend to use.