content/docs/03-agents/02-building-agents.mdx
The ToolLoopAgent provides a structured way to encapsulate LLM configuration, tools, and behavior into reusable components. It handles the agent loop for you, allowing the LLM to call tools multiple times in sequence to accomplish complex tasks. Define agents once and use them across your application.
When building AI applications, you often need to reuse the same model, tools, and system instructions across different routes and components. The ToolLoopAgent class provides a single place to define your agent's behavior.
Define an agent by instantiating the ToolLoopAgent class with your desired configuration:
```ts
import { ToolLoopAgent } from 'ai';
__PROVIDER_IMPORT__;

const myAgent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: 'You are a helpful assistant.',
  tools: {
    // Your tools here
  },
});
```
The ToolLoopAgent accepts the same settings as `generateText` and `streamText`. For example, configure the model and system instructions:
```ts
import { ToolLoopAgent } from 'ai';
__PROVIDER_IMPORT__;

const agent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: 'You are an expert software engineer.',
});
```
Provide tools that the agent can use to accomplish tasks:
```ts
import { ToolLoopAgent, tool } from 'ai';
__PROVIDER_IMPORT__;
import { z } from 'zod';

const codeAgent = new ToolLoopAgent({
  model: __MODEL__,
  tools: {
    runCode: tool({
      description: 'Execute Python code',
      inputSchema: z.object({
        code: z.string(),
      }),
      execute: async ({ code }) => {
        // Execute code and return result
        return { output: 'Code executed successfully' };
      },
    }),
  },
});
```
By default, agents run for up to 20 steps (`stopWhen: stepCountIs(20)`). In each step, the model either generates text or calls a tool. If it generates text, the agent completes; if it calls a tool, the AI SDK executes that tool and triggers a new generation, where the model can call another tool or generate text. This is how an agent calls multiple tools in sequence. To change how many steps the agent may take, configure `stopWhen`:
```ts
import { ToolLoopAgent, stepCountIs } from 'ai';
__PROVIDER_IMPORT__;

const agent = new ToolLoopAgent({
  model: __MODEL__,
  stopWhen: stepCountIs(20), // Allow up to 20 steps
});
```
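This loop can be sketched in simplified form. The following is an illustrative model only, not the SDK's implementation; the `SimStep` shape and `simulateLoop` helper are invented for the example:

```typescript
// Simplified model of the agent loop: each iteration is one generation.
// A step that produces tool calls triggers another iteration; a step that
// produces only text ends the loop, as does reaching the step limit.
type SimStep = { toolCalls: string[]; text: string };

function simulateLoop(
  generate: (stepNumber: number) => SimStep,
  maxSteps: number, // plays the role of stopWhen: stepCountIs(maxSteps)
): SimStep[] {
  const steps: SimStep[] = [];
  while (steps.length < maxSteps) {
    const step = generate(steps.length + 1);
    steps.push(step);
    if (step.toolCalls.length === 0) break; // text-only response => done
  }
  return steps;
}

// Example run: the model calls a tool twice, then answers with text.
const script: SimStep[] = [
  { toolCalls: ['runCode'], text: '' },
  { toolCalls: ['runCode'], text: '' },
  { toolCalls: [], text: 'All done.' },
];
const steps = simulateLoop(n => script[n - 1], 20);
console.log(steps.length); // 3
```

The real loop additionally feeds each tool result back into the conversation before triggering the next generation.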
Each step represents one generation, which results in either text or a tool call. The loop continues until the model generates text without calling any tools, or until a stop condition is met.
You can combine multiple conditions:
```ts
import { ToolLoopAgent, stepCountIs } from 'ai';
__PROVIDER_IMPORT__;

const agent = new ToolLoopAgent({
  model: __MODEL__,
  stopWhen: [
    stepCountIs(20), // Maximum 20 steps
    yourCustomCondition(), // Custom logic for when to stop
  ],
});
```
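A custom condition like `yourCustomCondition()` above is, at its core, a function that receives the steps taken so far and returns whether the loop should stop. A minimal sketch, with the `containsFinalAnswer` helper and the simplified step shape invented for illustration (they are not part of the SDK):

```typescript
// Hypothetical custom stop condition: stop once any step's text
// contains the marker "FINAL ANSWER".
type StepLike = { text?: string };

function containsFinalAnswer() {
  return ({ steps }: { steps: StepLike[] }): boolean =>
    steps.some(step => step.text?.includes('FINAL ANSWER') ?? false);
}

const shouldStop = containsFinalAnswer();
console.log(shouldStop({ steps: [{ text: 'still working...' }] })); // false
console.log(shouldStop({ steps: [{ text: 'FINAL ANSWER: 42' }] })); // true
```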
Learn more about loop control and stop conditions.
Control how the agent uses tools:
```ts
import { ToolLoopAgent } from 'ai';
__PROVIDER_IMPORT__;

const agent = new ToolLoopAgent({
  model: __MODEL__,
  tools: {
    // your tools here
  },
  toolChoice: 'required', // Force tool use
  // or toolChoice: 'none' to disable tools
  // or toolChoice: 'auto' (default) to let the model decide
});
```
You can also force the use of a specific tool:
```ts
import { ToolLoopAgent } from 'ai';
__PROVIDER_IMPORT__;

const agent = new ToolLoopAgent({
  model: __MODEL__,
  tools: {
    weather: weatherTool,
    cityAttractions: attractionsTool,
  },
  toolChoice: {
    type: 'tool',
    toolName: 'weather', // Force the weather tool to be used
  },
});
```
Define structured output schemas:
```ts
import { ToolLoopAgent, Output, stepCountIs } from 'ai';
__PROVIDER_IMPORT__;
import { z } from 'zod';

const analysisAgent = new ToolLoopAgent({
  model: __MODEL__,
  output: Output.object({
    schema: z.object({
      sentiment: z.enum(['positive', 'neutral', 'negative']),
      summary: z.string(),
      keyPoints: z.array(z.string()),
    }),
  }),
  stopWhen: stepCountIs(10),
});

const { output } = await analysisAgent.generate({
  prompt: 'Analyze customer feedback from the last quarter',
});
```
System instructions define your agent's behavior, personality, and constraints. They set the context for all interactions and guide how the agent responds to user queries and uses tools.
Set the agent's role and expertise:
```ts
const agent = new ToolLoopAgent({
  model: __MODEL__,
  instructions:
    'You are an expert data analyst. You provide clear insights from complex data.',
});
```
Provide specific guidelines for agent behavior:
```ts
const codeReviewAgent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: `You are a senior software engineer conducting code reviews.
Your approach:
- Focus on security vulnerabilities first
- Identify performance bottlenecks
- Suggest improvements for readability and maintainability
- Be constructive and educational in your feedback
- Always explain why something is an issue and how to fix it`,
});
```
Set boundaries and ensure consistent behavior:
```ts
const customerSupportAgent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: `You are a customer support specialist for an e-commerce platform.
Rules:
- Never make promises about refunds without checking the policy
- Always be empathetic and professional
- If you don't know something, say so and offer to escalate
- Keep responses concise and actionable
- Never share internal company information`,
  tools: {
    checkOrderStatus,
    lookupPolicy,
    createTicket,
  },
});
```
Guide how the agent should use available tools:
```ts
const researchAgent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: `You are a research assistant with access to search and document tools.
When researching:
1. Always start with a broad search to understand the topic
2. Use document analysis for detailed information
3. Cross-reference multiple sources before drawing conclusions
4. Cite your sources when presenting information
5. If information conflicts, present both viewpoints`,
  tools: {
    webSearch,
    analyzeDocument,
    extractQuotes,
  },
});
```
Control the output format and communication style:
```ts
const technicalWriterAgent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: `You are a technical documentation writer.
Writing style:
- Use clear, simple language
- Avoid jargon unless necessary
- Structure information with headers and bullet points
- Include code examples where relevant
- Write in second person ("you" instead of "the user")
Always format responses in Markdown.`,
});
```
Once defined, you can use your agent in three ways:
Use `generate()` for one-time text generation:
```ts
const result = await myAgent.generate({
  prompt: 'What is the weather like?',
});

console.log(result.text);
```
Use `stream()` for streaming responses:
```ts
const result = await myAgent.stream({
  prompt: 'Tell me a story',
});

for await (const chunk of result.textStream) {
  console.log(chunk);
}
```
Use `createAgentUIStreamResponse()` to create API responses for client applications:
```ts
// In your API route (e.g., app/api/chat/route.ts)
import { createAgentUIStreamResponse } from 'ai';

export async function POST(request: Request) {
  const { messages } = await request.json();

  return createAgentUIStreamResponse({
    agent: myAgent,
    uiMessages: messages,
  });
}
```
Agents provide lifecycle callbacks that let you hook into different phases of the agent execution. These are useful for logging, observability, debugging, and custom telemetry.
```ts
const result = await myAgent.generate({
  prompt: 'Research and summarize the latest AI trends',
  experimental_onStart({ model, functionId }) {
    console.log('Agent started', { model: model.modelId, functionId });
  },
  experimental_onStepStart({ stepNumber, model }) {
    console.log(`Step ${stepNumber} starting`, { model: model.modelId });
  },
  experimental_onToolCallStart({ toolCall }) {
    console.log(`Tool call starting: ${toolCall.toolName}`);
  },
  experimental_onToolCallFinish({ toolCall, durationMs, success }) {
    console.log(`Tool call finished: ${toolCall.toolName} (${durationMs}ms)`, {
      success,
    });
  },
  onStepFinish({ stepNumber, usage, finishReason, toolCalls }) {
    console.log(`Step ${stepNumber} completed:`, {
      inputTokens: usage.inputTokens,
      outputTokens: usage.outputTokens,
      finishReason,
      toolsUsed: toolCalls?.map(tc => tc.toolName),
    });
  },
  onFinish({ totalUsage, steps }) {
    console.log('Agent finished:', {
      totalSteps: steps.length,
      totalTokens: totalUsage.totalTokens,
    });
  },
});
```
The available lifecycle callbacks are:
- `experimental_onStart`: Called once when the agent operation begins, before any LLM calls. Receives model info, prompt, settings, and telemetry metadata.
- `experimental_onStepStart`: Called before each step (LLM call). Receives the step number, model, messages being sent, tools, and prior steps.
- `experimental_onToolCallStart`: Called right before a tool's execute function runs. Receives the tool call object with tool name, call ID, and input.
- `experimental_onToolCallFinish`: Called right after a tool's execute function completes or errors. Receives the tool call, durationMs, and a success discriminator (output when successful, error when failed).
- `onStepFinish`: Called after each step finishes. Receives step results including usage, finish reason, and tool calls.
- `onFinish`: Called when all steps are finished and the response is complete. Receives all step results, total usage, and telemetry metadata.

All lifecycle callbacks can be defined in the constructor for agent-wide tracking, in the `generate()`/`stream()` call for per-call tracking, or both. When both are provided, both are called (constructor first, then the method callback):
```ts
const agent = new ToolLoopAgent({
  model: __MODEL__,
  onStepFinish: async ({ stepNumber, usage }) => {
    // Agent-wide logging
    console.log(`Agent step ${stepNumber}:`, usage.totalTokens);
  },
});

// Method-level callback runs after constructor callback
const result = await agent.generate({
  prompt: 'Hello',
  onStepFinish: async ({ stepNumber, usage }) => {
    // Per-call tracking (e.g., for billing)
    await trackUsage(stepNumber, usage);
  },
});
```
You can infer types for your agent's UIMessages:
```ts
import { ToolLoopAgent, InferAgentUIMessage } from 'ai';

const myAgent = new ToolLoopAgent({
  // ... configuration
});

// Infer the UIMessage type for UI components or persistence
export type MyAgentUIMessage = InferAgentUIMessage<typeof myAgent>;
```
Use this type in your client components with useChat:
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import type { MyAgentUIMessage } from '@/agent/my-agent';

export function Chat() {
  const { messages } = useChat<MyAgentUIMessage>();
  // Full type safety for your messages and tools
}
```
Now that you understand building agents, you can: