createAgentUIStreamResponse

The createAgentUIStreamResponse function executes an Agent, streams its output as a UI message stream, and returns an HTTP Response object whose body is that live stream. It is designed for API routes that deliver real-time agent results, such as chat endpoints or streaming tool-use operations.

Import

<Snippet text={`import { createAgentUIStreamResponse } from "ai"`} prompt={false} />

Usage

```ts
import { ToolLoopAgent, createAgentUIStreamResponse } from 'ai';
__PROVIDER_IMPORT__;

const agent = new ToolLoopAgent({
  model: __MODEL__,
  instructions: 'You are a helpful assistant.',
  tools: { weather: weatherTool, calculator: calculatorTool },
});

export async function POST(request: Request) {
  const { messages } = await request.json();

  // Optional: support cancellation (aborts on disconnect, etc.)
  const abortController = new AbortController();

  return createAgentUIStreamResponse({
    agent,
    uiMessages: messages,
    abortSignal: abortController.signal, // optional
    // ...other UIMessageStreamOptions like sendSources, experimental_transform, etc.
  });
}
```
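
The abortController above is a placeholder for whatever cancellation source you use. One common pattern, sketched here with only standard AbortSignal helpers (requires Node 20+ or a comparable runtime; the 60-second budget is an illustrative value), combines the incoming request's own signal with a deadline:

```typescript
// Combine a caller-supplied signal with a time budget, so streaming stops
// either when the client disconnects or when the deadline passes.
function combinedAbortSignal(requestSignal: AbortSignal, totalMs: number): AbortSignal {
  return AbortSignal.any([requestSignal, AbortSignal.timeout(totalMs)]);
}

// Illustrative usage: aborting either source aborts the combined signal.
const controller = new AbortController();
const signal = combinedAbortSignal(controller.signal, 60_000);
controller.abort();
console.log(signal.aborted); // true
```

Passing the combined signal as abortSignal stops the agent stream on whichever fires first; the built-in timeout option covers the deadline case on its own if you do not need to merge signals yourself.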

Parameters

<PropertiesTable content={[ { name: 'agent', type: 'Agent', isRequired: true, description: 'The agent instance to stream responses from. Must implement .stream({ prompt, ... }) and define the tools property.', }, { name: 'uiMessages', type: 'unknown[]', isRequired: true, description: 'Array of input UI messages provided to the agent (e.g., user and assistant messages).', }, { name: 'abortSignal', type: 'AbortSignal', isRequired: false, description: 'Optional abort signal to cancel streaming, e.g., on client disconnect.', }, { name: 'timeout', type: 'number | { totalMs?: number }', isRequired: false, description: 'Timeout in milliseconds, specified as a number or as an object with a totalMs property. The call is aborted if it takes longer than the specified timeout. Can be used alongside abortSignal.', }, { name: 'options', type: 'CALL_OPTIONS', isRequired: false, description: 'Optional agent call options, for agents with the generic parameter CALL_OPTIONS.', }, { name: 'experimental_transform', type: 'StreamTextTransform | StreamTextTransform[]', isRequired: false, description: 'Optional stream transforms to post-process text output, the same as in lower-level streaming APIs.', }, { name: 'onStepFinish', type: 'ToolLoopAgentOnStepFinishCallback', isRequired: false, description: 'Callback invoked after each agent step (LLM/tool call) completes. Useful for tracking token usage or logging intermediate steps.', }, { name: '...UIMessageStreamOptions', type: 'UIMessageStreamOptions', isRequired: false, description: 'Other UI message output options, such as sendSources.', }, { name: 'headers', type: 'HeadersInit', isRequired: false, description: 'Optional HTTP headers to include in the Response object.', }, { name: 'status', type: 'number', isRequired: false, description: 'Optional HTTP status code.', }, { name: 'statusText', type: 'string', isRequired: false, description: 'Optional HTTP status text.', }, { name: 'consumeSseStream', type: '(options: { stream: ReadableStream<string> }) => PromiseLike<void> | void', isRequired: false, description: 'Optional function to consume the SSE stream. When provided, it is called with the SSE stream to handle consumption.', }, ]} />

Returns

A Promise<Response> whose body is a streaming UI message output from the agent. Use this as the return value of API/server handlers in serverless, Next.js, Express, Hono, or edge runtime contexts.

Example: Next.js API Route Handler

```ts
import { createAgentUIStreamResponse } from 'ai';
import { MyCustomAgent } from '@/agent/my-custom-agent';

export async function POST(request: Request) {
  const { messages } = await request.json();

  return createAgentUIStreamResponse({
    agent: MyCustomAgent,
    uiMessages: messages,
    sendSources: true, // (optional)
    // headers, status, abortSignal, and other UIMessageStreamOptions also supported
  });
}
```
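
The consumeSseStream option (see Parameters) receives the SSE output as a ReadableStream<string>. A minimal sketch of a compatible handler, assuming you just want to mirror each chunk to a log (the console sink is purely illustrative):

```typescript
// A consumeSseStream-compatible handler: drains the SSE text stream and
// forwards each chunk to a sink (console.log here, for illustration only).
async function logSseStream({ stream }: { stream: ReadableStream<string> }): Promise<void> {
  const reader = stream.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      console.log('[sse]', value);
    }
  } finally {
    reader.releaseLock();
  }
}

// Illustrative usage with a locally constructed stream:
const demo = new ReadableStream<string>({
  start(controller) {
    controller.enqueue('data: {"type":"text-delta"}\n\n');
    controller.close();
  },
});
await logSseStream({ stream: demo });
```

A handler like this could just as well persist chunks to storage or forward them to an observability pipeline; it runs alongside the Response being streamed to the client.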

How It Works

1. UI Message Validation: Validates the incoming uiMessages array according to the agent's specified tools and requirements.
2. Model Message Conversion: Converts validated UI messages into the internal model message format for the agent.
3. Streaming Agent Output: Invokes the agent's .stream({ prompt, ... }) to get a stream of chunks (steps/UI messages).
4. HTTP Response Creation: Wraps the output stream as a readable HTTP Response object that streams UI message chunks to the client.
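
Step 4 can be sketched with web-standard APIs alone. The header values below are typical for server-sent events, not taken from the library's source, and the real implementation inside ai may differ:

```typescript
// Wrap a stream of UI-message chunks as an SSE-style HTTP Response.
// Conceptual mirror of step 4; the real logic lives inside the `ai` package.
function toSseResponse(chunks: ReadableStream<string>): Response {
  const encoder = new TextEncoder();
  const body = chunks.pipeThrough(
    new TransformStream<string, Uint8Array>({
      transform(chunk, controller) {
        // Each UI-message chunk becomes one SSE `data:` event.
        controller.enqueue(encoder.encode(`data: ${chunk}\n\n`));
      },
    }),
  );
  return new Response(body, {
    headers: {
      'content-type': 'text/event-stream',
      'cache-control': 'no-cache',
    },
  });
}
```

Because the body is a live stream, the Response starts flowing to the client as soon as the first chunk is enqueued, rather than after the agent finishes.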

Notes

- Your agent must implement .stream({ prompt, ... }) and define a tools property (even an empty {}) to work with this function.
- Server only: call this API in backend contexts (API routes, edge/serverless/server route handlers). It is not for browser use.
- Additional options (headers, status, UI stream options, transforms, etc.) are available for advanced scenarios.
- The response body is a ReadableStream, so your platform and client must support HTTP streaming consumption.
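
On the consuming side, the notes above imply a client that reads the body incrementally. A minimal sketch using only the standard streams API; the locally built Response stands in for a real fetch call against your route:

```typescript
// Read a streaming Response body chunk by chunk, as a browser client would
// after `await fetch('/api/chat', ...)` (the endpoint path is hypothetical).
async function readStreamingBody(res: Response, onChunk: (text: string) => void): Promise<void> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // stream: true keeps multi-byte characters intact across chunk boundaries.
    onChunk(decoder.decode(value, { stream: true }));
  }
}

// Illustrative usage against a locally constructed Response:
let out = '';
await readStreamingBody(new Response('data: hello\n\n'), (t) => { out += t; });
console.log(out); // the complete streamed body
```

In real UI code you would typically not hand-roll this loop; the AI SDK's client-side hooks consume the UI message stream for you.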

See Also