AI SDK UI functions such as `useChat` and `useCompletion` support both text streams and data streams. The stream protocol defines how the data is streamed to the frontend on top of the HTTP protocol.

This page describes both protocols and how to use them in the backend and frontend. You can use this information to develop custom backends and frontends for your use case, e.g., to provide compatible API endpoints that are implemented in a different language such as Python, for instance with a FastAPI backend.
## Text Stream Protocol

A text stream contains plain-text chunks that are streamed to the frontend. Each chunk is appended to the previous ones to form the full text response.

Text streams are supported by `useChat`, `useCompletion`, and `useObject`.
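As a minimal standalone illustration of the appending behavior (a sketch, not SDK code): the frontend decodes each incoming chunk and concatenates it onto the text received so far. Using `TextDecoder` with `stream: true` correctly handles multi-byte characters that happen to be split across chunk boundaries:

```typescript
// Sketch: simulate an HTTP response body split into arbitrary byte chunks.
const encoder = new TextEncoder();
const bytes = encoder.encode('Hello, wörld!');
// Note that the second chunk ends mid-character ("ö" is two bytes in UTF-8).
const chunks = [bytes.slice(0, 5), bytes.slice(5, 9), bytes.slice(9)];

const decoder = new TextDecoder();
let text = '';
for (const chunk of chunks) {
  // `stream: true` buffers incomplete multi-byte sequences between calls.
  text += decoder.decode(chunk, { stream: true });
}
text += decoder.decode(); // flush any remaining buffered bytes

console.log(text); // "Hello, wörld!"
```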
When you use `useChat`, you need to enable text streaming by using the `TextStreamChatTransport`. With `useCompletion`, you can set the `streamProtocol` option to `text`.
You can generate text streams with `streamText` in the backend. When you call `toTextStreamResponse()` on the result object, a streaming HTTP response is returned.
Here is a Next.js example that uses the text stream protocol:
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { TextStreamChatTransport } from 'ai';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat({
    transport: new TextStreamChatTransport({ api: '/api/chat' }),
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
```ts
import { streamText, UIMessage, convertToModelMessages } from 'ai';
__PROVIDER_IMPORT__;

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: __MODEL__,
    messages: await convertToModelMessages(messages),
  });

  return result.toTextStreamResponse();
}
```
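To make the wire format concrete: a text stream response is simply a streamed plain-text body. A rough sketch of how a client consumes it (the response below is stubbed with a `ReadableStream` in place of a real HTTP response; `useChat` does this for you):

```typescript
// Stub a streaming HTTP response whose body emits plain-text chunks.
const response = new Response(
  new ReadableStream<Uint8Array>({
    start(controller) {
      const encoder = new TextEncoder();
      for (const chunk of ['Hello', ' from', ' the', ' stream']) {
        controller.enqueue(encoder.encode(chunk));
      }
      controller.close();
    },
  }),
  { headers: { 'Content-Type': 'text/plain; charset=utf-8' } },
);

// Read the body chunk by chunk and append each decoded piece.
const reader = response.body!.getReader();
const decoder = new TextDecoder();
let text = '';
for (;;) {
  const { done, value } = await reader.read();
  if (done) break;
  text += decoder.decode(value, { stream: true });
}

console.log(text); // "Hello from the stream"
```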
## Data Stream Protocol

A data stream follows a special protocol that the AI SDK provides to send information to the frontend. The data stream protocol uses the Server-Sent Events (SSE) format for improved standardization, keep-alive through ping, reconnect capabilities, and better cache handling.
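Concretely, each stream part is one SSE event: a `data:` line carrying a JSON payload, terminated by a blank line. The sketch below shows how a hypothetical custom backend could frame a complete message (the `formatSSE` helper and the IDs are made up for illustration; the `x-vercel-ai-ui-message-stream` header is required for custom backends):

```typescript
// Illustrative helper (not part of the SDK): frame one part as an SSE event.
const formatSSE = (part: Record<string, unknown>) =>
  `data: ${JSON.stringify(part)}\n\n`;

// A minimal, complete message: start -> text block -> finish -> [DONE].
const body = [
  formatSSE({ type: 'start', messageId: 'msg_1' }),
  formatSSE({ type: 'text-start', id: 'txt_1' }),
  formatSSE({ type: 'text-delta', id: 'txt_1', delta: 'Hello' }),
  formatSSE({ type: 'text-end', id: 'txt_1' }),
  formatSSE({ type: 'finish' }),
  'data: [DONE]\n\n', // terminal marker is a literal, not JSON
].join('');

const response = new Response(body, {
  headers: {
    'Content-Type': 'text/event-stream',
    // Required when serving the data stream protocol from a custom backend:
    'x-vercel-ai-ui-message-stream': 'v1',
  },
});

console.log(response.headers.get('x-vercel-ai-ui-message-stream')); // "v1"
```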
<Note>
  When you provide data streams from a custom backend, you need to set the
  `x-vercel-ai-ui-message-stream` header to `v1`.
</Note>

The following stream parts are currently supported:
### Message Start Part

Indicates the beginning of a new message with metadata.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"start","messageId":"..."}
```
### Text Parts

Text content is streamed using a start/delta/end pattern with unique IDs for each text block.

#### Text Start Part

Indicates the beginning of a text block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"text-start","id":"msg_68679a454370819ca74c8eb3d04379630dd1afb72306ca5d"}
```

#### Text Delta Part

Contains incremental text content for the text block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"text-delta","id":"msg_68679a454370819ca74c8eb3d04379630dd1afb72306ca5d","delta":"Hello"}
```

#### Text End Part

Indicates the completion of a text block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"text-end","id":"msg_68679a454370819ca74c8eb3d04379630dd1afb72306ca5d"}
```
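Putting the three text parts together, a frontend can reconstruct each text block by keying deltas on the block `id` — a simplified sketch of what `useChat` does internally:

```typescript
// The three text part shapes from the protocol above.
type TextPart =
  | { type: 'text-start'; id: string }
  | { type: 'text-delta'; id: string; delta: string }
  | { type: 'text-end'; id: string };

// Parts as they might arrive over the stream (IDs are illustrative).
const parts: TextPart[] = [
  { type: 'text-start', id: 'txt_1' },
  { type: 'text-delta', id: 'txt_1', delta: 'Hello' },
  { type: 'text-delta', id: 'txt_1', delta: ' world' },
  { type: 'text-end', id: 'txt_1' },
];

// Accumulate deltas per text block id.
const blocks = new Map<string, string>();
for (const part of parts) {
  switch (part.type) {
    case 'text-start':
      blocks.set(part.id, ''); // open a new, empty text block
      break;
    case 'text-delta':
      blocks.set(part.id, (blocks.get(part.id) ?? '') + part.delta);
      break;
    case 'text-end':
      break; // block complete; nothing left to append
  }
}

console.log(blocks.get('txt_1')); // "Hello world"
```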
### Reasoning Parts

Reasoning content is streamed using a start/delta/end pattern with unique IDs for each reasoning block.

#### Reasoning Start Part

Indicates the beginning of a reasoning block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"reasoning-start","id":"reasoning_123"}
```

#### Reasoning Delta Part

Contains incremental reasoning content for the reasoning block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"reasoning-delta","id":"reasoning_123","delta":"This is some reasoning"}
```

#### Reasoning End Part

Indicates the completion of a reasoning block.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"reasoning-end","id":"reasoning_123"}
```
#### Reasoning File Part

Reasoning file parts contain references to files generated as part of reasoning, such as images produced during the reasoning process.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"reasoning-file","url":"data:image/png;base64,iVBOR...","mediaType":"image/png"}
```
### Source Parts

Source parts provide references to external content sources.

#### Source URL Part

References to external URLs.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"source-url","sourceId":"https://example.com","url":"https://example.com"}
```

#### Source Document Part

References to documents or files.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"source-document","sourceId":"https://example.com","mediaType":"file","title":"Title"}
```
### File Part

The file parts contain references to files with their media type.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"file","url":"https://example.com/file.png","mediaType":"image/png"}
```
### Custom Parts

Custom parts represent provider-specific content that doesn't fit into the standard part types. The `kind` field identifies the specific custom content type in the format `{provider}-{provider-type}`.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"custom","kind":"openai-compaction","providerMetadata":{"openai":{"itemId":"cmp_123"}}}
```
### Data Parts

Custom data parts allow streaming of arbitrary structured data with type-specific handling.

Format: Server-Sent Event with JSON object where the type includes a custom suffix

Example:

```
data: {"type":"data-weather","data":{"location":"SF","temperature":100}}
```

The `data-*` type pattern allows you to define custom data types that your frontend can handle specifically.
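For example, a frontend could dispatch on the `data-*` suffix to give each custom data type its own rendering logic. The handler map and the weather payload shape below are illustrative only, not part of the SDK:

```typescript
// A data part carries a `data-*` type plus an arbitrary payload.
type DataPart = { type: `data-${string}`; data: unknown };

// Hypothetical per-type renderers keyed by the full part type.
const handlers: Record<string, (data: any) => string> = {
  'data-weather': d => `Weather in ${d.location}: ${d.temperature}°F`,
};

function renderDataPart(part: DataPart): string {
  const handler = handlers[part.type];
  return handler ? handler(part.data) : 'Unsupported data part';
}

const rendered = renderDataPart({
  type: 'data-weather',
  data: { location: 'SF', temperature: 100 },
});

console.log(rendered); // "Weather in SF: 100°F"
```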
### Error Part

The error parts are appended to the message as they are received.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"error","errorText":"error message"}
```
### Tool Input Start Part

Indicates the beginning of tool input streaming.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"tool-input-start","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","toolName":"getWeatherInformation"}
```

### Tool Input Delta Part

Incremental chunks of tool input as it's being generated.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"tool-input-delta","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","inputTextDelta":"San Francisco"}
```

### Tool Input Available Part

Indicates that tool input is complete and ready for execution.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"tool-input-available","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","toolName":"getWeatherInformation","input":{"city":"San Francisco"}}
```

### Tool Output Available Part

Contains the result of tool execution.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"tool-output-available","toolCallId":"call_fJdQDqnXeGxTmr4E3YPSR7Ar","output":{"city":"San Francisco","weather":"sunny"}}
```
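The tool input parts fit together as follows: the deltas are fragments of the JSON-encoded input, so concatenating them yields the same object that `tool-input-available` finally delivers in `input`. A minimal sketch (the tool call ID and tool name are examples):

```typescript
// Tool parts as they might arrive over the stream.
const toolParts = [
  { type: 'tool-input-start', toolCallId: 'call_1', toolName: 'getWeatherInformation' },
  { type: 'tool-input-delta', toolCallId: 'call_1', inputTextDelta: '{"city":' },
  { type: 'tool-input-delta', toolCallId: 'call_1', inputTextDelta: '"San Francisco"}' },
] as const;

// Concatenate the JSON text fragments for this tool call.
let inputText = '';
for (const part of toolParts) {
  if (part.type === 'tool-input-delta') inputText += part.inputTextDelta;
}

// Once complete, the accumulated text parses into the tool input object.
const input = JSON.parse(inputText);

console.log(input.city); // "San Francisco"
```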
### Start Step Part

A part indicating the start of a step.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"start-step"}
```
### Finish Step Part

A part indicating that a step (i.e., one LLM API call in the backend) has been completed.

This part is necessary to correctly process multiple stitched assistant calls, e.g., when calling tools in the backend and using steps in `useChat` at the same time.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"finish-step"}
```
### Finish Message Part

A part indicating the completion of a message.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"finish"}
```
### Abort Part

Indicates the stream was aborted.

Format: Server-Sent Event with JSON object

Example:

```
data: {"type":"abort","reason":"user cancelled"}
```
### Stream Termination

The stream ends with a special `[DONE]` marker.

Format: Server-Sent Event with literal `[DONE]`

Example:

```
data: [DONE]
```
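A heavily simplified parser for this wire format (real SSE parsing must also handle partial chunks and other SSE fields such as `event:` and `id:`) reads each `data:` payload and stops at the `[DONE]` marker:

```typescript
// Parse a complete SSE text into stream parts, stopping at [DONE].
function parseDataStream(raw: string): Array<Record<string, unknown>> {
  const result: Array<Record<string, unknown>> = [];
  for (const event of raw.split('\n\n')) {
    if (!event.startsWith('data: ')) continue; // skip empty trailing segments
    const payload = event.slice('data: '.length);
    if (payload === '[DONE]') break; // terminal marker: stop reading
    result.push(JSON.parse(payload));
  }
  return result;
}

// A minimal example stream.
const stream =
  'data: {"type":"start","messageId":"msg_1"}\n\n' +
  'data: {"type":"finish"}\n\n' +
  'data: [DONE]\n\n';

const parts = parseDataStream(stream);

console.log(parts.length); // 2
```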
The data stream protocol is supported by `useChat` and `useCompletion` on the frontend, and it is used by default. `useCompletion` only supports the text and data stream parts.

On the backend, you can use `toUIMessageStreamResponse()` from the `streamText` result object to return a streaming HTTP response.
Here is a Next.js example that uses the UI message stream protocol:
```tsx
'use client';

import { useChat } from '@ai-sdk/react';
import { useState } from 'react';

export default function Chat() {
  const [input, setInput] = useState('');
  const { messages, sendMessage } = useChat();

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(message => (
        <div key={message.id} className="whitespace-pre-wrap">
          {message.role === 'user' ? 'User: ' : 'AI: '}
          {message.parts.map((part, i) => {
            switch (part.type) {
              case 'text':
                return <div key={`${message.id}-${i}`}>{part.text}</div>;
            }
          })}
        </div>
      ))}

      <form
        onSubmit={e => {
          e.preventDefault();
          sendMessage({ text: input });
          setInput('');
        }}
      >
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={e => setInput(e.currentTarget.value)}
        />
      </form>
    </div>
  );
}
```
```ts
import { streamText, UIMessage, convertToModelMessages } from 'ai';
__PROVIDER_IMPORT__;

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: __MODEL__,
    messages: await convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse();
}
```