Create a custom stream to control the streaming format and structure of tool calls instead of using the built-in AI SDK data stream format (`toUIMessageStream()`).
`fullStream` (on `StreamTextResult`) gives you direct access to all model events. You can transform, filter, and structure these events into your own streaming format. This gives you the benefits of the AI SDK's unified provider interface without prescribing how you consume the stream.

For complete control over both the streaming format and the execution loop, combine this pattern with a manual agent loop.
Create a route handler that calls a model and streams the response in a custom format:
```tsx
import { tools } from '@/ai/tools'; // your tools
import { stepCountIs, streamText } from 'ai';
__PROVIDER_IMPORT__;

export type StreamEvent =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolName: string; input: unknown }
  | { type: 'tool-result'; toolName: string; result: unknown };

const encoder = new TextEncoder();

function formatEvent(event: StreamEvent): Uint8Array {
  return encoder.encode('data: ' + JSON.stringify(event) + '\n\n');
}

export async function POST(request: Request) {
  const { prompt } = await request.json();

  const result = streamText({
    prompt,
    model: __MODEL__,
    tools,
    stopWhen: stepCountIs(5),
  });

  const transformStream = new TransformStream({
    transform(chunk, controller) {
      switch (chunk.type) {
        case 'text-delta':
          controller.enqueue(formatEvent({ type: 'text', text: chunk.text }));
          break;
        case 'tool-call':
          controller.enqueue(
            formatEvent({
              type: 'tool-call',
              toolName: chunk.toolName,
              input: chunk.input,
            }),
          );
          break;
        case 'tool-result':
          controller.enqueue(
            formatEvent({
              type: 'tool-result',
              toolName: chunk.toolName,
              result: chunk.output,
            }),
          );
          break;
      }
    },
  });

  return new Response(result.fullStream.pipeThrough(transformStream), {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}
```
The route uses `streamText` to process the prompt with tools. Each event (text, tool calls, tool results) is encoded as a Server-Sent Event with a `data:` prefix and sent to the client.
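As a concrete illustration, running the `formatEvent` helper above on a tool-call event produces a standard SSE frame. This is a standalone sketch; the `weather` tool name and its input are hypothetical:

```typescript
// Minimal re-creation of the formatEvent helper for a standalone check.
const encoder = new TextEncoder();

function formatEvent(event: { type: string; [key: string]: unknown }): Uint8Array {
  return encoder.encode('data: ' + JSON.stringify(event) + '\n\n');
}

const frame = new TextDecoder().decode(
  formatEvent({ type: 'tool-call', toolName: 'weather', input: { city: 'Paris' } }),
);
// frame is: 'data: {"type":"tool-call","toolName":"weather","input":{"city":"Paris"}}\n\n'
```

Each frame ends with a blank line, which is what marks an event boundary in the SSE format.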
Create a simple interface that parses and displays the stream:
```tsx
'use client';

import { useState } from 'react';
import { StreamEvent } from './api/stream/route';

export default function Home() {
  const [prompt, setPrompt] = useState('');
  const [events, setEvents] = useState<StreamEvent[]>([]);
  const [isStreaming, setIsStreaming] = useState(false);

  const handleSubmit = async () => {
    setEvents([]);
    setIsStreaming(true);
    setPrompt('');

    const response = await fetch('/api/stream', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ prompt }),
    });

    const reader = response.body?.getReader();
    const decoder = new TextDecoder();

    if (reader) {
      let buffer = '';
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split('\n');
        buffer = lines.pop() || '';

        for (const line of lines) {
          if (line.trim()) {
            const dataStr = line.replace(/^data: /, '');
            const event = JSON.parse(dataStr) as StreamEvent;
            setEvents(prev => [...prev, event]);
          }
        }
      }
    }

    setIsStreaming(false);
  };

  return (
    <div>
      <input
        value={prompt}
        onChange={e => setPrompt(e.target.value)}
        placeholder="Enter a prompt..."
      />
      <button onClick={handleSubmit} disabled={isStreaming}>
        {isStreaming ? 'Streaming...' : 'Send'}
      </button>
      <pre>{JSON.stringify(events, null, 2)}</pre>
    </div>
  );
}
```
The client uses the Fetch API to stream responses from the server. Since the server sends Server-Sent Events (newline-delimited, with a `data:` prefix), the client:

- reads the response body with `getReader()` and decodes each chunk
- buffers incoming text and splits it on newlines, keeping the trailing partial line for the next read
- strips the `data:` prefix from each complete line, parses the JSON, and appends the event to the events list

Events are rendered in order as they arrive, giving you a linear representation of the AI's response.
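The split-and-buffer logic can be pulled out into a standalone function for testing. The sketch below (with a hypothetical `parseSSEChunk` name, not part of the example above) applies the same strategy: split on newlines, carry the trailing partial line over to the next read, and parse each complete `data:` line:

```typescript
type StreamEvent =
  | { type: 'text'; text: string }
  | { type: 'tool-call'; toolName: string; input: unknown }
  | { type: 'tool-result'; toolName: string; result: unknown };

// Splits buffered SSE text into complete events, returning the parsed
// events plus any trailing partial line to prepend to the next chunk.
function parseSSEChunk(buffer: string): { events: StreamEvent[]; rest: string } {
  const lines = buffer.split('\n');
  const rest = lines.pop() ?? ''; // last element may be an incomplete line
  const events: StreamEvent[] = [];
  for (const line of lines) {
    if (line.trim()) {
      events.push(JSON.parse(line.replace(/^data: /, '')) as StreamEvent);
    }
  }
  return { events, rest };
}

// A chunk that ends mid-event: one complete frame, one partial line.
const { events, rest } = parseSSEChunk(
  'data: {"type":"text","text":"hi"}\n\ndata: {"type":"te',
);
// events holds the one complete event; rest holds the partial line.
```

Keeping the partial tail in `rest` is what makes the parser safe against network chunk boundaries falling in the middle of a JSON payload.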