Reference: Agent.streamLegacy() (Legacy) | Streaming

docs/src/content/en/reference/streaming/agents/streamLegacy.mdx

2025-12-18 · 18.9 KB

Agent.streamLegacy() (Legacy)

:::warning

Deprecated: This method is deprecated and only works with V1 models. For V2 models, use the new .stream() method instead. See the migration guide for details on upgrading.

:::

The .streamLegacy() method is the legacy version of the agent streaming API, used for real-time streaming of responses from V1 model agents. This method accepts messages and optional streaming options.

Usage example

```typescript
await agent.streamLegacy('message for agent')
```

Parameters

<PropertiesTable content={[ { name: 'messages', type: 'string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]', description: 'The messages to send to the agent. Can be a single string, array of strings, or structured message objects.', }, { name: 'options', type: 'AgentStreamOptions<OUTPUT, EXPERIMENTAL_OUTPUT>', isOptional: true, description: 'Optional configuration for the streaming process.', properties: [ { type: 'AgentStreamOptions', parameters: [ { name: 'abortSignal', type: 'AbortSignal', isOptional: true, description: "Signal object that allows you to abort the agent's execution. When the signal is aborted, all ongoing operations will be terminated.", }, { name: 'context', type: 'CoreMessage[]', isOptional: true, description: 'Additional context messages to provide to the agent.', }, { name: 'experimental_output', type: 'Zod schema | JsonSchema7', isOptional: true, description: 'Enables structured output generation alongside text generation and tool calls. The model will generate responses that conform to the provided schema.', }, { name: 'instructions', type: 'string', isOptional: true, description: "Custom instructions that override the agent's default instructions for this specific generation. Useful for dynamically modifying agent behavior without creating a new agent instance.", }, { name: 'output', type: 'Zod schema | JsonSchema7', isOptional: true, description: 'Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.', }, { name: 'memory', type: 'object', isOptional: true, description: 'Configuration for memory. This is the preferred way to manage memory.', properties: [ { parameters: [ { name: 'thread', type: 'string | { id: string; metadata?: Record<string, any>, title?: string }', isOptional: false, description: 'The conversation thread, as a string ID or an object with an id and optional metadata.', }, ], }, { parameters: [ { name: 'resource', type: 'string', isOptional: false, description: 'Identifier for the user or resource associated with the thread.', }, ], }, { parameters: [ { name: 'options', type: 'MemoryConfig', isOptional: true, description: 'Configuration for memory behavior, like message history and semantic recall.', }, ], }, ], }, { name: 'maxSteps', type: 'number', isOptional: true, defaultValue: '5', description: 'Maximum number of execution steps allowed.', }, { name: 'maxRetries', type: 'number', isOptional: true, defaultValue: '2', description: 'Maximum number of retries. Set to 0 to disable retries.', }, { name: 'memoryOptions', type: 'MemoryConfig', isOptional: true, description: 'Deprecated. Use memory.options instead. Configuration options for memory management.', properties: [ { parameters: [ { name: 'lastMessages', type: 'number | false', isOptional: true, description: 'Number of recent messages to include in context, or false to disable.', }, ], }, { parameters: [ { name: 'semanticRecall', type: "boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }", isOptional: true, description: 'Enable semantic recall to find relevant past messages. Can be a boolean or detailed configuration.', }, ], }, { parameters: [ { name: 'workingMemory', type: 'WorkingMemory', isOptional: true, description: 'Configuration for working memory functionality.', }, ], }, { parameters: [ { name: 'threads', type: '{ generateTitle?: boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> } }', isOptional: true, description: 'Thread-specific configuration, including automatic title generation.', }, ], }, ], }, { name: 'onFinish', type: 'StreamTextOnFinishCallback<any> | StreamObjectOnFinishCallback<OUTPUT>', isOptional: true, description: 'Callback function called when streaming completes. Receives the final result.', }, { name: 'onStepFinish', type: 'StreamTextOnStepFinishCallback<any> | never', isOptional: true, description: 'Callback function called after each execution step. Receives step details as a JSON string. Unavailable for structured output.', }, { name: 'resourceId', type: 'string', isOptional: true, description: 'Deprecated. Use memory.resource instead. Identifier for the user or resource interacting with the agent. Must be provided if threadId is provided.', }, { name: 'telemetry', type: 'TelemetrySettings', isOptional: true, description: 'Settings for telemetry collection during streaming.', properties: [ { parameters: [ { name: 'isEnabled', type: 'boolean', isOptional: true, description: 'Enable or disable telemetry. Disabled by default while experimental.', }, ], }, { parameters: [ { name: 'recordInputs', type: 'boolean', isOptional: true, description: 'Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information.', }, ], }, { parameters: [ { name: 'recordOutputs', type: 'boolean', isOptional: true, description: 'Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information.', }, ], }, { parameters: [ { name: 'functionId', type: 'string', isOptional: true, description: 'Identifier for this function. Used to group telemetry data by function.', }, ], }, ], }, { name: 'temperature', type: 'number', isOptional: true, description: "Controls randomness in the model's output. Higher values (e.g., 0.8) make the output more random, lower values (e.g., 0.2) make it more focused and deterministic.", }, { name: 'threadId', type: 'string', isOptional: true, description: 'Deprecated. Use memory.thread instead. Identifier for the conversation thread. Allows for maintaining context across multiple interactions. Must be provided if resourceId is provided.', }, { name: 'toolChoice', type: "'auto' | 'none' | 'required' | { type: 'tool'; toolName: string }", isOptional: true, defaultValue: "'auto'", description: 'Controls how the agent uses tools during streaming.', properties: [ { parameters: [ { name: "'auto'", type: 'string', description: 'Let the model decide whether to use tools (default).', }, ], }, { parameters: [ { name: "'none'", type: 'string', description: 'Do not use any tools.', }, ], }, { parameters: [ { name: "'required'", type: 'string', description: 'Require the model to use at least one tool.', }, ], }, { parameters: [ { name: "{ type: 'tool'; toolName: string }", type: 'object', description: 'Require the model to use a specific tool by name.', }, ], }, ], }, { name: 'toolsets', type: 'ToolsetsInput', isOptional: true, description: 'Additional toolsets to make available to the agent during streaming.', }, { name: 'clientTools', type: 'ToolsInput', isOptional: true, description: "Tools that are executed on the 'client' side of the request. These tools do not have execute functions in the definition.", }, { name: 'savePerStep', type: 'boolean', isOptional: true, description: 'Save messages incrementally after each stream step completes (default: false).', }, { name: 'providerOptions', type: 'Record<string, Record<string, JSONValue>>', isOptional: true, description: "Additional provider-specific options that are passed through to the underlying LLM provider. The structure is { providerName: { optionKey: value } }. For example: { openai: { reasoningEffort: 'high' }, anthropic: { maxTokens: 1000 } }.", properties: [ { parameters: [ { name: 'openai', type: 'Record<string, JSONValue>', isOptional: true, description: "OpenAI-specific options. Example: { reasoningEffort: 'high' }", }, ], }, { parameters: [ { name: 'anthropic', type: 'Record<string, JSONValue>', isOptional: true, description: 'Anthropic-specific options. Example: { maxTokens: 1000 }', }, ], }, { parameters: [ { name: 'google', type: 'Record<string, JSONValue>', isOptional: true, description: 'Google-specific options. Example: { safetySettings: [...] }', }, ], }, { parameters: [ { name: '[providerName]', type: 'Record<string, JSONValue>', isOptional: true, description: 'Other provider-specific options. The key is the provider name and the value is a record of provider-specific options.', }, ], }, ], }, { name: 'runId', type: 'string', isOptional: true, description: 'Unique ID for this generation run. Useful for tracking and debugging purposes.', }, { name: 'requestContext', type: 'RequestContext', isOptional: true, description: 'Request Context for dependency injection and contextual information.', }, { name: 'maxTokens', type: 'number', isOptional: true, description: 'Maximum number of tokens to generate.', }, { name: 'topP', type: 'number', isOptional: true, description: 'Nucleus sampling. This is a number between 0 and 1. It is recommended to set either temperature or topP, but not both.', }, { name: 'topK', type: 'number', isOptional: true, description: "Only sample from the top K options for each subsequent token. Used to remove 'long tail' low probability responses.", }, { name: 'presencePenalty', type: 'number', isOptional: true, description: 'Presence penalty setting. It affects the likelihood of the model to repeat information that is already in the prompt. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).', }, { name: 'frequencyPenalty', type: 'number', isOptional: true, description: 'Frequency penalty setting. It affects the likelihood of the model to repeatedly use the same words or phrases. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).', }, { name: 'stopSequences', type: 'string[]', isOptional: true, description: 'Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.', }, { name: 'seed', type: 'number', isOptional: true, description: 'The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.', }, { name: 'headers', type: 'Record<string, string | undefined>', isOptional: true, description: 'Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.', }, ], }, ], }, ]} />
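Several of these options nest (notably `memory`), which is easy to miss in the table. The sketch below writes out an options object combining memory, step limits, and tool choice; the thread and resource IDs are illustrative values, not anything defined by Mastra:

```typescript
// Illustrative streamLegacy options object, matching the shapes described in
// the parameters table. The IDs and values here are made up for the example.
const options = {
  memory: {
    // `thread` can be a bare string ID or an object with an id plus metadata.
    thread: { id: 'thread-42', title: 'Support chat' },
    resource: 'user-123',
    options: { lastMessages: 10 }, // MemoryConfig: keep the last 10 messages in context
  },
  maxSteps: 3, // stop after at most 3 execution steps (default is 5)
  maxRetries: 0, // disable retries (default is 2)
  toolChoice: 'auto' as const, // let the model decide whether to call tools
};

console.log(options.memory.thread.id);
```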

Returns

<PropertiesTable content={[ { name: 'textStream', type: 'AsyncGenerator<string>', isOptional: true, description: 'Async generator that yields text chunks as they become available.', }, { name: 'fullStream', type: 'Promise<ReadableStream>', isOptional: true, description: 'Promise that resolves to a ReadableStream for the complete response.', }, { name: 'text', type: 'Promise<string>', isOptional: true, description: 'Promise that resolves to the complete text response.', }, { name: 'usage', type: 'Promise<{ totalTokens: number; promptTokens: number; completionTokens: number }>', isOptional: true, description: 'Promise that resolves to token usage information.', }, { name: 'finishReason', type: 'Promise<string>', isOptional: true, description: 'Promise that resolves to the reason why the stream finished.', }, { name: 'toolCalls', type: 'Promise<Array<ToolCall>>', isOptional: true, description: 'Promise that resolves to the tool calls made during the streaming process.', properties: [ { parameters: [ { name: 'toolName', type: 'string', required: true, description: 'The name of the tool invoked.', }, ], }, { parameters: [ { name: 'args', type: 'any', required: true, description: 'The arguments passed to the tool.', }, ], }, ], }, ]} />
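Since `textStream` is an async generator, the usual way to consume it is a `for await...of` loop. The sketch below stubs a result object with the shape described above (a real one comes from `await agent.streamLegacy(...)`, which needs a configured agent):

```typescript
// Stub of a streamLegacy-style result, used only to show the consumption
// pattern; the real object is returned by `await agent.streamLegacy(...)`.
async function* chunks(): AsyncGenerator<string> {
  yield 'Hello, ';
  yield 'world!';
}

const result = {
  textStream: chunks(),
  text: Promise.resolve('Hello, world!'),
  usage: Promise.resolve({ totalTokens: 12, promptTokens: 8, completionTokens: 4 }),
};

async function main() {
  let streamed = '';
  // Each chunk becomes available as soon as the model emits it.
  for await (const chunk of result.textStream) {
    streamed += chunk;
  }
  console.log(streamed); // prints "Hello, world!"

  // Aggregate values resolve once the stream has finished.
  const usage = await result.usage;
  console.log(usage.totalTokens);
}

main();
```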

Extended usage example

```typescript
await agent.streamLegacy('message for agent', {
  temperature: 0.7,
  maxSteps: 3,
  memory: {
    thread: 'user-123',
    resource: 'test-app',
  },
  toolChoice: 'auto',
})
```

Migration to new API

:::info

The new .stream() method offers enhanced capabilities including AI SDK v5+ compatibility, better structured output handling, and improved callback system. See the migration guide for detailed migration instructions.

:::

Quick migration example

Before (Legacy)

```typescript
const result = await agent.streamLegacy('message', {
  temperature: 0.7,
  maxSteps: 3,
  onFinish: result => console.log(result),
})
```

After (New API)

```typescript
const result = await agent.stream('message', {
  modelSettings: {
    temperature: 0.7,
  },
  maxSteps: 3,
  onFinish: result => console.log(result),
})
```
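The shape change above can be captured in a small helper. This is a hypothetical sketch, not part of Mastra: it nests the model tuning setting shown here (`temperature`) under `modelSettings` and passes the remaining options through unchanged.

```typescript
// Hypothetical migration helper (not a Mastra API): legacy top-level model
// settings move under `modelSettings`; other options pass through as-is.
type LegacyStreamOptions = {
  temperature?: number;
  maxSteps?: number;
  onFinish?: (result: unknown) => void;
};

type NewStreamOptions = {
  modelSettings?: { temperature?: number };
  maxSteps?: number;
  onFinish?: (result: unknown) => void;
};

function toNewStreamOptions({ temperature, ...rest }: LegacyStreamOptions): NewStreamOptions {
  return temperature === undefined
    ? { ...rest }
    : { ...rest, modelSettings: { temperature } };
}

const migrated = toNewStreamOptions({ temperature: 0.7, maxSteps: 3 });
console.log(migrated); // { maxSteps: 3, modelSettings: { temperature: 0.7 } }
```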