# processors-with-ai-sdk

This example demonstrates how to use `withMastra` to wrap AI SDK models with Mastra processors and memory.
## Setup

Install dependencies:

```bash
pnpm install
```

Copy `.env.example` to `.env` and add your OpenAI API key:

```bash
cp .env.example .env
```

Run the basic example:

```bash
pnpm start
```
This example shows:

- `withMastra` to wrap a model with processors
- `generateText` with processors: `pnpm start`
- Streaming: `pnpm start:stream`
- Aborting via a tripwire processor: `pnpm start:tripwire`
- Memory: `pnpm start:memory`
## Usage

The `withMastra` function wraps an AI SDK model with Mastra processors and memory:

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withMastra } from '@mastra/ai-sdk';

// Create your processors
const myProcessor = {
  id: 'my-processor',
  async processInput({ messages }) {
    // Transform input messages
    return messages;
  },
  async processOutputResult({ messages }) {
    // Transform output messages
    return messages;
  },
};

// Wrap the model with processors
const model = withMastra(openai('gpt-4o'), {
  inputProcessors: [myProcessor],
  outputProcessors: [myProcessor],
});

// Use with generateText or streamText
const { text } = await generateText({
  model,
  prompt: 'Hello!',
});
```
## Processor hooks

- `processInput` - Runs before the LLM call, transforms input messages
- `processOutputStream` - Runs on each streaming chunk (streaming only)
- `processOutputResult` - Runs after the LLM call, transforms output messages

Processors can abort processing by calling `abort(reason)`:
```typescript
const guardProcessor = {
  id: 'guard',
  async processInput({ messages, abort }) {
    for (const msg of messages) {
      if (containsBadContent(msg)) {
        abort('Content blocked by guard');
      }
    }
    return messages;
  },
};
```
When a processor aborts:
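The abort flow can be sketched in a self-contained way. This sketch assumes `abort(reason)` short-circuits processing by throwing; the exact tripwire behavior of `@mastra/ai-sdk` may differ, and `TripwireError`, `runGuard`, and `isBlocked` are hypothetical names local to this example.

```typescript
// Hypothetical sketch: abort(reason) throws, so a flagged message stops
// the request before any model call happens.
class TripwireError extends Error {
  constructor(public reason: string) {
    super(reason);
  }
}

type GuardMessage = { role: string; content: string };

function runGuard(
  messages: GuardMessage[],
  isBlocked: (m: GuardMessage) => boolean,
): GuardMessage[] {
  const abort = (reason: string): never => {
    throw new TripwireError(reason);
  };
  for (const msg of messages) {
    if (isBlocked(msg)) abort('Content blocked by guard');
  }
  // Clean input passes through unchanged
  return messages;
}
```

A caller can catch `TripwireError` and return a refusal message instead of letting the request crash.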