content/docs/02-getting-started/08-tanstack-start.mdx
The AI SDK is a powerful TypeScript library designed to help developers build AI-powered applications.
In this quickstart tutorial, you'll build a simple agent with a streaming chat user interface. Along the way, you'll learn key concepts and techniques that are fundamental to using the AI SDK in your own projects.
If you are unfamiliar with the concepts of Prompt Engineering and HTTP Streaming, you can optionally read these documents first.
To follow this quickstart, you'll need:
If you haven't obtained your Vercel AI Gateway API key, you can do so by signing up on the Vercel website.
Start by creating a new TanStack Start application. This command will create a new directory named my-ai-app and set up a basic TanStack Start application inside it.
Navigate to the newly created directory:
<Snippet text="cd my-ai-app" />

Install ai and @ai-sdk/react, the AI package and the AI SDK's React hooks. The AI SDK's Vercel AI Gateway provider ships with the ai package. You'll also install zod, a schema validation library used for defining tool inputs.
<div className="my-4">
  <Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
    <Tab>
      <Snippet text="pnpm add ai @ai-sdk/react zod" dark />
    </Tab>
    <Tab>
      <Snippet text="npm install ai @ai-sdk/react zod" dark />
    </Tab>
    <Tab>
      <Snippet text="yarn add ai @ai-sdk/react zod" dark />
    </Tab>
    <Tab>
      <Snippet text="bun add ai @ai-sdk/react zod" dark />
    </Tab>
  </Tabs>
</div>
Create a .env file in your project root and add your AI Gateway API key. This key authenticates your application with Vercel AI Gateway.
Edit the .env file:
```env
AI_GATEWAY_API_KEY=xxxxxxxxx
```
Replace xxxxxxxxx with your actual Vercel AI Gateway API key.
Create a route handler at src/routes/api/chat.ts and add the following code:
```tsx
import { streamText, UIMessage, convertToModelMessages } from 'ai';
__PROVIDER_IMPORT__;
import { createFileRoute } from '@tanstack/react-router';

export const Route = createFileRoute('/api/chat')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const { messages }: { messages: UIMessage[] } = await request.json();

        const result = streamText({
          model: __MODEL__,
          messages: await convertToModelMessages(messages),
        });

        return result.toUIMessageStreamResponse();
      },
    },
  },
});
```
Let's take a look at what is happening in this code:
1. Define a POST request handler using TanStack Start's server routes and extract messages from the body of the request. The messages variable contains a history of the conversation between you and the chatbot and provides the chatbot with the necessary context to make the next generation. The messages are of UIMessage type, which are designed for use in application UI - they contain the entire message history and associated metadata like timestamps.
2. Call streamText, which is imported from the ai package. This function accepts a configuration object that contains a model provider and messages (defined in step 1). You can pass additional settings to further customize the model's behavior. The messages key expects a ModelMessage[] array. This type is different from UIMessage in that it does not include metadata, such as timestamps or sender information. To convert between these types, we use the convertToModelMessages function, which strips the UI-specific metadata and transforms the UIMessage[] array into the ModelMessage[] format that the model expects.
3. The streamText function returns a StreamTextResult. This result object contains the toUIMessageStreamResponse function, which converts the result to a streamed response object.

This Route Handler creates a POST request endpoint at /api/chat.
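To make the UIMessage-to-ModelMessage conversion concrete, here is a simplified sketch of what such a conversion does conceptually. The types below are deliberately minimal stand-ins, not the real types from the ai package, which carry many more fields:

```typescript
// Hypothetical, simplified shapes. The real UIMessage/ModelMessage types
// in the ai package include additional fields (tool parts, files, etc.).
type SimpleUIMessage = {
  id: string;
  role: 'user' | 'assistant';
  parts: { type: 'text'; text: string }[];
  metadata?: { createdAt?: number };
};

type SimpleModelMessage = {
  role: 'user' | 'assistant';
  content: string;
};

// Conceptually, conversion drops UI-only fields (id, metadata) and
// flattens the parts array into the content the model expects.
function toModelMessages(messages: SimpleUIMessage[]): SimpleModelMessage[] {
  return messages.map(({ role, parts }) => ({
    role,
    content: parts
      .filter(part => part.type === 'text')
      .map(part => part.text)
      .join(''),
  }));
}

const uiMessages: SimpleUIMessage[] = [
  {
    id: 'msg-1',
    role: 'user',
    parts: [{ type: 'text', text: 'Hello!' }],
    metadata: { createdAt: 1700000000000 },
  },
];

console.log(toModelMessages(uiMessages));
// [{ role: 'user', content: 'Hello!' }]
```

In your route handler, convertToModelMessages performs this translation for you, so you never need to write it by hand.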
The AI SDK supports dozens of model providers through first-party, OpenAI-compatible, and community packages.
This quickstart uses the Vercel AI Gateway provider, which is the default global provider. This means you can access models using a simple string in the model configuration:
```tsx
model: __MODEL__;
```
You can also explicitly import and use the gateway provider in two other equivalent ways:
```tsx
// Option 1: Import from 'ai' package (included by default)
import { gateway } from 'ai';

model: gateway('anthropic/claude-sonnet-4.5');

// Option 2: Install and import from '@ai-sdk/gateway' package
import { gateway } from '@ai-sdk/gateway';

model: gateway('anthropic/claude-sonnet-4.5');
```
To use a different provider, install its package and create a provider instance. For example, to use OpenAI directly:
<div className="my-4">
  <Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
    <Tab>
      <Snippet text="pnpm add @ai-sdk/openai" dark />
    </Tab>
    <Tab>
      <Snippet text="npm install @ai-sdk/openai" dark />
    </Tab>
    <Tab>
      <Snippet text="yarn add @ai-sdk/openai" dark />
    </Tab>
    <Tab>
      <Snippet text="bun add @ai-sdk/openai" dark />
    </Tab>
  </Tabs>
</div>
```tsx
import { openai } from '@ai-sdk/openai';

model: openai('gpt-5.1');
```
You can change the default global provider so string model references use your preferred provider everywhere in your application. Learn more about provider management.
Pick the approach that best matches how you want to manage providers across your application.
Now that you have a Route Handler that can query an LLM, it's time to set up your frontend. The AI SDK's UI package abstracts the complexity of a chat interface into one hook, useChat.
Update your index route (src/routes/index.tsx) with the following code to show a list of chat messages and provide a user message input:
```tsx
import { createFileRoute } from '@tanstack/react-router';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export const Route = createFileRoute('/')({
  component: Chat,
});

function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
This page utilizes the useChat hook, which will, by default, use the POST API route you created earlier (/api/chat). The hook provides functions and state for handling user input and form submission. The useChat hook provides multiple utility functions and state variables:
- messages - the current chat messages (an array of objects with id, role, and parts properties).
- sendMessage - a function to send a message to the chat API.

The component uses local state (useState) to manage the input field value, and handles form submission by calling sendMessage with the input text and then clearing the input field.
The LLM's response is accessed through the message parts array. Each message contains an ordered array of parts that represents everything the model generated in its response. These parts can include plain text, reasoning tokens, and more that you will see later. The parts array preserves the sequence of the model's outputs, allowing you to display or process each component in the order it was generated.
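The order-preserving nature of the parts array can be sketched with plain TypeScript. The part shapes below are simplified placeholders for illustration, not the full part types from the ai package:

```typescript
// Simplified part shapes; real messages can also carry tool-call,
// file, and other part types.
type MessagePart =
  | { type: 'text'; text: string }
  | { type: 'reasoning'; text: string };

// Build a display string while preserving the order in which the
// model produced each part.
function renderParts(parts: MessagePart[]): string {
  return parts
    .map(part =>
      part.type === 'reasoning' ? `(thinking: ${part.text})` : part.text,
    )
    .join('\n');
}

const parts: MessagePart[] = [
  { type: 'reasoning', text: 'The user greeted me.' },
  { type: 'text', text: 'Hello! How can I help?' },
];

console.log(renderParts(parts));
// (thinking: The user greeted me.)
// Hello! How can I help?
```

The switch statement in the Chat component above plays the same role: it dispatches on part.type while React renders the parts in their original order.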
With that, you have built everything you need for your chatbot! To start your application, use the command:
<Snippet text="pnpm run dev" />

Head to your browser and open http://localhost:3000. You should see an input field. Test it out by entering a message and see the AI chatbot respond in real-time! The AI SDK makes it fast and easy to build AI chat interfaces with TanStack Start.
While large language models (LLMs) have incredible generation capabilities, they struggle with discrete tasks (e.g. mathematics) and interacting with the outside world (e.g. getting the weather). This is where tools come in.
Tools are actions that an LLM can invoke. The results of these actions can be reported back to the LLM to be considered in the next response.
For example, if a user asks about the current weather, without tools, the model would only be able to provide general information based on its training data. But with a weather tool, it can fetch and provide up-to-date, location-specific weather information.
Let's enhance your chatbot by adding a simple weather tool.
Modify your src/routes/api/chat.ts file to include the new weather tool:
```tsx
import { streamText, UIMessage, convertToModelMessages, tool } from 'ai';
__PROVIDER_IMPORT__;
import { createFileRoute } from '@tanstack/react-router';
import { z } from 'zod';

export const Route = createFileRoute('/api/chat')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const { messages }: { messages: UIMessage[] } = await request.json();

        const result = streamText({
          model: __MODEL__,
          messages: await convertToModelMessages(messages),
          tools: {
            weather: tool({
              description: 'Get the weather in a location (fahrenheit)',
              inputSchema: z.object({
                location: z
                  .string()
                  .describe('The location to get the weather for'),
              }),
              execute: async ({ location }) => {
                const temperature = Math.round(Math.random() * (90 - 32) + 32);
                return {
                  location,
                  temperature,
                };
              },
            }),
          },
        });

        return result.toUIMessageStreamResponse();
      },
    },
  },
});
```
In this updated code:
You import the tool function from the ai package and z from zod for schema validation.
You define a tools object with a weather tool. This tool:
- Has an inputSchema using a Zod schema, specifying that the tool requires a location string to execute. The model will attempt to extract this input from the context of the conversation. If it can't, it will ask the user for the missing information.
- Has an execute function that simulates getting weather data (in this case, it returns a random temperature). This is an asynchronous function running on the server, so you can fetch real data from an external API.

Now your chatbot can "fetch" weather information for any location the user asks about. When the model determines it needs to use the weather tool, it will generate a tool call with the necessary input. The execute function will then be automatically run, and the tool output will be added to the messages as a tool message.
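The round trip from tool call to tool result can be sketched without the SDK. The shapes and registry below are illustrative stand-ins for what the ai package manages internally, not its real implementation:

```typescript
// Hypothetical minimal shapes for a tool-call round trip.
type ToolCall = { toolName: string; input: { location: string } };
type ToolResult = { toolName: string; output: unknown };

// A registry of executable tools, keyed by name.
const tools = {
  weather: async ({ location }: { location: string }) => ({
    location,
    temperature: 72, // a real tool would call a weather API here
  }),
};

// When the model emits a tool call, the app looks up the tool by name,
// runs its execute function, and records the output as a tool result.
async function runToolCall(call: ToolCall): Promise<ToolResult> {
  const execute = tools[call.toolName as keyof typeof tools];
  if (!execute) throw new Error(`Unknown tool: ${call.toolName}`);
  return { toolName: call.toolName, output: await execute(call.input) };
}

runToolCall({ toolName: 'weather', input: { location: 'New York' } }).then(
  result => console.log(result),
);
```

In the real route handler, streamText performs this lookup-and-execute step for you whenever the model emits a tool call.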
Try asking something like "What's the weather in New York?" and see how the model uses the new tool.
Notice the blank response in the UI? This is because instead of generating a text response, the model generated a tool call. You can access the tool call and subsequent tool result on the client via the tool-weather part of the message.parts array.
To display the tool invocation in your UI, update your src/routes/index.tsx file:
```tsx
import { createFileRoute } from '@tanstack/react-router';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export const Route = createFileRoute('/')({
  component: Chat,
});

function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
              case 'tool-weather':
                return (
                  <pre key={`${message.id}-${i}`}>
                    {JSON.stringify(part, null, 2)}
                  </pre>
                );
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
With this change, you're updating the UI to handle different message parts. For text parts, you display the text content as before. For weather tool invocations, you display a JSON representation of the tool call and its result.
Now, when you ask about the weather, you'll see the tool call and its result displayed in your chat interface.
You may have noticed that while the tool is now visible in the chat interface, the model isn't using this information to answer your original query. This is because once the model generates a tool call, it has technically completed its generation.
To solve this, you can enable multi-step tool calls using stopWhen. By default, stopWhen is set to stepCountIs(1), which means generation stops after the first step when there are tool results. By changing this condition, you can allow the model to automatically send tool results back to itself to trigger additional generations until your specified stopping condition is met. In this case, you want the model to continue generating so it can use the weather tool results to answer your original question.
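Conceptually, the multi-step loop behaves like the following sketch. This is a simplified stand-in for illustration, not the ai package's actual implementation of stopWhen or stepCountIs:

```typescript
// A stop condition receives the steps so far and says whether to stop.
type Step = { toolResults: unknown[] };
type StopCondition = (steps: Step[]) => boolean;

// Simplified stand-in for the SDK's stepCountIs helper.
const stepCountIs =
  (count: number): StopCondition =>
  steps =>
    steps.length >= count;

// Simplified generation loop: after each step, tool results are fed
// back to the model until the stop condition is met or the model
// finishes with plain text (no tool calls).
function runSteps(
  generateStep: (stepIndex: number) => Step,
  stopWhen: StopCondition,
): Step[] {
  const steps: Step[] = [];
  while (true) {
    const step = generateStep(steps.length);
    steps.push(step);
    // Stop if the condition is met or the step produced no tool results.
    if (stopWhen(steps) || step.toolResults.length === 0) break;
  }
  return steps;
}

// Example: the first step calls a tool, the second answers in text.
const steps = runSteps(
  i => (i === 0 ? { toolResults: [{ temperature: 72 }] } : { toolResults: [] }),
  stepCountIs(5),
);
console.log(steps.length); // 2
```

With the default stepCountIs(1), this loop would exit after the first step, which is exactly why the model never got to use the weather tool's output.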
Modify your src/routes/api/chat.ts file to include the stopWhen condition:
```tsx
import {
  streamText,
  UIMessage,
  convertToModelMessages,
  tool,
  stepCountIs,
} from 'ai';
__PROVIDER_IMPORT__;
import { createFileRoute } from '@tanstack/react-router';
import { z } from 'zod';

export const Route = createFileRoute('/api/chat')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const { messages }: { messages: UIMessage[] } = await request.json();

        const result = streamText({
          model: __MODEL__,
          messages: await convertToModelMessages(messages),
          stopWhen: stepCountIs(5),
          tools: {
            weather: tool({
              description: 'Get the weather in a location (fahrenheit)',
              inputSchema: z.object({
                location: z
                  .string()
                  .describe('The location to get the weather for'),
              }),
              execute: async ({ location }) => {
                const temperature = Math.round(Math.random() * (90 - 32) + 32);
                return {
                  location,
                  temperature,
                };
              },
            }),
          },
        });

        return result.toUIMessageStreamResponse();
      },
    },
  },
});
```
In this updated code, you set stopWhen to stepCountIs(5), allowing the model to use up to 5 "steps" for any given generation.
Head back to the browser and ask about the weather in a location. You should now see the model using the weather tool results to answer your question.
Allowing up to 5 steps per generation enables more complex interactions and lets the model gather and process information over several steps if needed. You can see this in action by adding another tool, this time to convert the temperature from Fahrenheit to Celsius.
Update your src/routes/api/chat.ts file to add a new tool to convert the temperature from Fahrenheit to Celsius:
```tsx
import {
  streamText,
  UIMessage,
  convertToModelMessages,
  tool,
  stepCountIs,
} from 'ai';
__PROVIDER_IMPORT__;
import { createFileRoute } from '@tanstack/react-router';
import { z } from 'zod';

export const Route = createFileRoute('/api/chat')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const { messages }: { messages: UIMessage[] } = await request.json();

        const result = streamText({
          model: __MODEL__,
          messages: await convertToModelMessages(messages),
          stopWhen: stepCountIs(5),
          tools: {
            weather: tool({
              description: 'Get the weather in a location (fahrenheit)',
              inputSchema: z.object({
                location: z
                  .string()
                  .describe('The location to get the weather for'),
              }),
              execute: async ({ location }) => {
                const temperature = Math.round(Math.random() * (90 - 32) + 32);
                return {
                  location,
                  temperature,
                };
              },
            }),
            convertFahrenheitToCelsius: tool({
              description: 'Convert a temperature in fahrenheit to celsius',
              inputSchema: z.object({
                temperature: z
                  .number()
                  .describe('The temperature in fahrenheit to convert'),
              }),
              execute: async ({ temperature }) => {
                const celsius = Math.round((temperature - 32) * (5 / 9));
                return {
                  celsius,
                };
              },
            }),
          },
        });

        return result.toUIMessageStreamResponse();
      },
    },
  },
});
```
Update your src/routes/index.tsx file to render the new temperature conversion tool:
```tsx
import { createFileRoute } from '@tanstack/react-router';
import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export const Route = createFileRoute('/')({
  component: Chat,
});

function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();
  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
              case 'tool-weather':
              case 'tool-convertFahrenheitToCelsius':
                return (
                  <pre key={`${message.id}-${i}`}>
                    {JSON.stringify(part, null, 2)}
                  </pre>
                );
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
This update handles the new tool-convertFahrenheitToCelsius part type, displaying the temperature conversion tool calls and results in the UI.
Now, when you ask "What's the weather in New York in celsius?", you should see a more complete interaction:
This multi-step approach allows the model to gather information and use it to provide more accurate and contextual responses, making your chatbot considerably more useful.
This simple example demonstrates how tools can expand your model's capabilities. You can create more complex tools to integrate with real APIs, databases, or any other external systems, allowing the model to access and process real-world data in real-time. Tools bridge the gap between the model's knowledge cutoff and current information.
You've built an AI chatbot using the AI SDK! From here, you have several paths to explore: