docs/src/content/en/guides/migrations/vnext-to-standard-apis.mdx
As of v0.20.0 of `@mastra/core`, the following changes apply.

The original methods have been renamed and maintain backward compatibility with AI SDK v4 and v1 models:
- `.stream()` → `.streamLegacy()`
- `.generate()` → `.generateLegacy()`

These are now the current APIs, with full AI SDK v5 and v2 model compatibility:
- `.streamVNext()` → `.stream()`
- `.generateVNext()` → `.generate()`

If you're already using `.streamVNext()` and `.generateVNext()`, use find/replace to change those calls to `.stream()` and `.generate()` respectively.
If you're using the old `.stream()` and `.generate()`, decide whether you want to upgrade. If you don't, use find/replace to change those calls to `.streamLegacy()` and `.generateLegacy()`.
Choose the migration path that fits your needs:
- **Stay on the legacy APIs:** change all `.stream()` and `.generate()` calls to `.streamLegacy()` and `.generateLegacy()` respectively. No further changes required.
- **Adopt the standard APIs:** change all `.streamVNext()` and `.generateVNext()` calls to `.stream()` and `.generate()` respectively. No further changes required.
- **Upgrade from the legacy APIs:** make sure all of your models are AI SDK v5 models, then follow the guide below to understand the key differences and update your code accordingly.
The updated .stream() and .generate() methods differ from their legacy counterparts in behavior, compatibility, return types, and available options. This section highlights the most important changes you need to understand when migrating.
### Legacy APIs

`.generateLegacy()` and `.streamLegacy()` only support AI SDK v4 models (`specificationVersion: 'v1'`).

### Standard APIs

`.generate()` and `.stream()` only support AI SDK v5 models (`specificationVersion: 'v2'`).

This is enforced at runtime with clear error messages.
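Conceptually, the guard behaves like the sketch below. The function name, interface, and error text here are illustrative only, not Mastra internals:

```typescript
// Illustrative sketch: mimics the kind of runtime check the standard
// APIs apply to a model's specificationVersion. Not Mastra's actual code.
interface ModelLike {
  specificationVersion: 'v1' | 'v2'
}

function assertV2Model(model: ModelLike): void {
  if (model.specificationVersion !== 'v2') {
    throw new Error(
      '.generate()/.stream() require an AI SDK v5 model (specificationVersion "v2"); ' +
        'use .generateLegacy()/.streamLegacy() for v1 models.',
    )
  }
}
```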
### Legacy APIs

- `.generateLegacy()` returns `GenerateTextResult` or `GenerateObjectResult`
- `.streamLegacy()` returns `StreamTextResult` or `StreamObjectResult`

See the following API references for more information:
### Standard APIs

`.generate()` calls `.stream()` and awaits `.getFullOutput()`:

- `format: 'mastra'` (default): returns `MastraModelOutput.getFullOutput()`
- `format: 'aisdk'`: returns `AISDKV5OutputStream.getFullOutput()`

`.stream()` returns the stream object directly:

- `format: 'mastra'` (default): returns `MastraModelOutput<OUTPUT>`
- `format: 'aisdk'`: returns `AISDKV5OutputStream<OUTPUT>`

See the following API references for more information:
Legacy APIs have no `format` option and always return AI SDK v4 types.

Standard APIs use the `format` option to choose the output type: `'mastra'` (default) or `'aisdk'` (AI SDK v5 compatible).

```typescript
// Mastra native format (default)
const result = await agent.stream(messages)
```

```typescript
// AI SDK v5 compatibility
const result = await agent.stream(messages, {
  format: 'aisdk',
})
```
The following options are available in the standard `.stream()` and `.generate()`, but not in their legacy counterparts:
`format` - Choose between `'mastra'` or `'aisdk'` output format:

```typescript
const result = await agent.stream(messages, {
  format: 'aisdk', // or 'mastra' (default)
})
```
`system` - Custom system message (separate from instructions):

```typescript
const result = await agent.stream(messages, {
  system: 'You are a helpful assistant',
})
```
`structuredOutput` - Enhanced structured output with model override and custom options:

- `jsonPromptInjection` - Overrides the default behaviour of passing `response_format` to the model; instead, context is injected into the prompt to coerce the model into returning structured output.
- `model` - If a model is provided, a sub-agent is created to structure the response from the main agent. The main agent calls tools and returns text, and the sub-agent returns an object that conforms to the schema you provide. This is a replacement for `experimental_output`.
- `errorStrategy` - Determines what happens when the output doesn't match the schema:
```typescript
const result = await agent.generate(messages, {
  structuredOutput: {
    schema: z.object({
      name: z.string(),
      age: z.number(),
    }),
    model: 'openai/gpt-5.4', // Optional model override for structuring
    errorStrategy: 'fallback',
    fallbackValue: { name: 'unknown', age: 0 },
    instructions: 'Extract user information', // Override default structuring instructions
  },
})
```
`stopWhen` - Flexible stop conditions (step count, token limit, etc.):

```typescript
const result = await agent.stream(messages, {
  stopWhen: ({ steps, totalTokens }) => steps >= 5 || totalTokens >= 10000,
})
```
`providerOptions` - Provider-specific options (e.g., OpenAI-specific settings):

```typescript
const result = await agent.stream(messages, {
  providerOptions: {
    openai: {
      store: true,
      metadata: { userId: '123' },
    },
  },
})
```
`onChunk` - Callback for each streaming chunk:

```typescript
const result = await agent.stream(messages, {
  onChunk: chunk => {
    console.log('Received chunk:', chunk)
  },
})
```
`onError` - Error callback:

```typescript
const result = await agent.stream(messages, {
  onError: error => {
    console.error('Stream error:', error)
  },
})
```
`onAbort` - Abort callback:

```typescript
const result = await agent.stream(messages, {
  onAbort: () => {
    console.log('Stream aborted')
  },
})
```
`activeTools` - Specify which tools are active for this execution:

```typescript
const result = await agent.stream(messages, {
  activeTools: ['search', 'calculator'], // Only these tools will be available
})
```
`abortSignal` - `AbortSignal` for cancellation:

```typescript
const controller = new AbortController()
const result = await agent.stream(messages, {
  abortSignal: controller.signal,
})
// Later: controller.abort();
```
`prepareStep` - Callback before each step in multi-step execution:

```typescript
const result = await agent.stream(messages, {
  prepareStep: ({ step, state }) => {
    console.log('About to execute step:', step)
    return {
      /* modified state */
    }
  },
})
```
`requireToolApproval` - Require approval for all tool calls:

```typescript
const result = await agent.stream(messages, {
  requireToolApproval: true,
})
```
`temperature` and other model settings - Unified in `modelSettings`:

```typescript
const result = await agent.stream(messages, {
  modelSettings: {
    temperature: 0.7,
    maxTokens: 1000,
    topP: 0.9,
  },
})
```
`resourceId` and `threadId` - Moved to the `memory` object:

```typescript
const result = await agent.stream(messages, {
  memory: {
    resource: 'user-123',
    thread: 'thread-456',
  },
})
```
`experimental_output` - Use `structuredOutput` instead to allow for tool calls and an object return:

```typescript
const result = await agent.generate(messages, {
  structuredOutput: {
    schema: z.object({
      summary: z.string(),
    }),
    model: 'openai/gpt-5.4',
  },
})
```
`output` - Deprecated in favor of `structuredOutput`. To achieve the same results, omit the `model` and only pass `structuredOutput.schema`; optionally add `jsonPromptInjection: true` if your model doesn't natively support `response_format`:

```typescript
const result = await agent.generate(messages, {
  structuredOutput: {
    schema: z.object({
      name: z.string(),
    }),
  },
})
```
`memoryOptions` - Use `memory` instead:

```typescript
const result = await agent.generate(messages, {
  memory: {},
})
```
### Legacy APIs

Legacy APIs accept `CoreMessage[]`. See the following API references for more information:

### Standard APIs

Standard APIs accept `ModelMessage[]`.
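As a rough sketch of the shape (the real `ModelMessage` type from the `ai` package has more variants, such as tool messages and multi-part content, so the local type below is a simplification for illustration):

```typescript
// Simplified sketch of the AI SDK v5 ModelMessage shape; the real type
// is exported by the `ai` package and is richer than this.
type ModelMessage =
  | { role: 'system'; content: string }
  | { role: 'user'; content: string }
  | { role: 'assistant'; content: string }

const messages: ModelMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Summarize this document.' },
]
```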
`toolChoice` uses the AI SDK v5 `ToolChoice` type:

```typescript
type ToolChoice<TOOLS extends Record<string, unknown>> =
  | 'auto'
  | 'none'
  | 'required'
  | {
      type: 'tool'
      toolName: Extract<keyof TOOLS, string>
    }
```

See the following API references for more information:
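For example, the union above lets you force a call to one specific tool. The `MyTools` map below is a hypothetical example, and the type is repeated locally so the snippet is self-contained:

```typescript
// Local copy of the AI SDK v5 ToolChoice type shown above.
type ToolChoice<TOOLS extends Record<string, unknown>> =
  | 'auto'
  | 'none'
  | 'required'
  | {
      type: 'tool'
      toolName: Extract<keyof TOOLS, string>
    }

// Hypothetical tool map for illustration only.
type MyTools = { search: unknown; calculator: unknown }

// Force the model to call the 'search' tool on this step.
const choice: ToolChoice<MyTools> = { type: 'tool', toolName: 'search' }
```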