content/docs/02-foundations/04-tools.mdx
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather).
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, when you ask an LLM for the "weather in London", and there is a weather tool available, it could call a tool with London as the argument. The tool would then fetch the weather data and return it to the LLM. The LLM can then use this information in its response.
A tool is an object that can be called by the model to perform a specific task.
You can use tools with `generateText` and `streamText` by passing one or more tools to the `tools` parameter.
A tool consists of three properties:
- `description`: An optional description of the tool that can influence when the tool is picked.
- `inputSchema`: A Zod schema or a JSON schema that defines the input required for the tool to run. The schema is consumed by the LLM, and also used to validate the LLM tool calls.
- `execute`: An optional async function that is called with the arguments from the tool call.

If the LLM decides to use a tool, it will generate a tool call.
Tools with an `execute` function are run automatically when these calls are generated.
The outputs of the tool calls are returned using tool result objects.
You can automatically pass tool results back to the LLM
using multi-step calls with `streamText` and `generateText`.
The AI SDK supports three types of tools, each with different trade-offs:
Custom tools are tools you define entirely yourself, including the description, input schema, and execute function. They are provider-agnostic and give you full control.
```ts
import { tool } from 'ai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({
    location: z.string().describe('The location to get the weather for'),
  }),
  execute: async ({ location }) => {
    // Your implementation
    return { temperature: 72, conditions: 'sunny' };
  },
});
```
When to use: When you need full control, want provider portability, or are implementing application-specific functionality.
Provider-defined tools are tools where the provider specifies the tool's `inputSchema` and `description`, but you provide the `execute` function. These are sometimes called "client tools" because execution happens on your side.
Examples include Anthropic's bash and text_editor tools. The model has been specifically trained to use these tools effectively, which can result in better performance for supported tasks.
```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const result = await generateText({
  model: anthropic('claude-opus-4-5'),
  tools: {
    bash: anthropic.tools.bash_20250124({
      execute: async ({ command }) => {
        // Your implementation to run the command
        return runCommand(command);
      },
    }),
  },
  prompt: 'List files in the current directory',
});
```
When to use: When the provider offers a tool the model is trained to use well, and you want better performance for that specific task.
Provider-executed tools are tools that run entirely on the provider's servers. You configure them, but the provider handles execution. These are sometimes called "server-side tools".
Examples include OpenAI's web search and Anthropic's code execution. These provide out-of-the-box functionality without requiring you to set up infrastructure.
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-5.2'),
  tools: {
    web_search: openai.tools.webSearch(),
  },
  prompt: 'What happened in the news today?',
});
```
When to use: When you want powerful functionality (like web search or sandboxed code execution) without managing the infrastructure yourself.
| Aspect | Custom Tools | Provider-Defined Tools | Provider-Executed Tools |
|---|---|---|---|
| Execution | Your code | Your code | Provider's servers |
| Schema | You define | Provider defines | Provider defines |
| Portability | Works with any provider | Provider-specific | Provider-specific |
| Model Training | General tool use | Optimized for the tool | Optimized for the tool |
| Setup | You implement everything | You implement execute | Configuration only |
Schemas are used to define and validate tool inputs, tool outputs, and structured output generation.
The AI SDK supports the following schemas:
- `zodSchema()`
- `valibotSchema()` from `@ai-sdk/valibot`
- `jsonSchema()`

Given tools are JavaScript objects, they can be packaged and distributed through npm like any other library. This makes it easy to share reusable tools across projects and with the community.
Install a tool package and import the tools you need:
```bash
pnpm add some-tool-package
```
Then pass them directly to `generateText`, `streamText`, or your agent definition:
```ts
import { generateText, stepCountIs } from 'ai';
import { searchTool } from 'some-tool-package';

const { text } = await generateText({
  model: 'anthropic/claude-haiku-4.5',
  prompt: 'When was Vercel Ship AI?',
  tools: {
    webSearch: searchTool,
  },
  stopWhen: stepCountIs(10),
});
```
You can publish your own tool packages to npm for others to use. Simply export your tool objects from your package:
```ts
import { tool } from 'ai';
import { z } from 'zod';

export const myTool = tool({
  description: 'A helpful tool',
  inputSchema: z.object({
    query: z.string(),
  }),
  execute: async ({ query }) => {
    // your tool logic
    return { result: `Processed: ${query}` };
  },
});
```
Anyone can then install and use your tools by importing them.
To get started, you can use the AI SDK Tool Package Template which provides a ready-to-use starting point for publishing your own tools.
When you work with tools, you typically need a mix of application-specific tools and general-purpose tools. The community has created various toolsets and resources to help you build and use tools.
These packages provide pre-built tools you can install and use immediately:
- `bash`, `readFile`, and `writeFile` tools for AI agents. Supports `@vercel/sandbox` for full VM isolation.

These are pre-built tools available as MCP servers:
These tutorials and guides help you build your own tools that integrate with specific services:
The AI SDK Core Tool Calling and Agents documentation has more information about tools and tool calling.