content/docs/07-reference/01-ai-sdk-core/67-simulate-streaming-middleware.mdx
# simulateStreamingMiddleware()

`simulateStreamingMiddleware` is a middleware function that simulates streaming behavior with responses from non-streaming language models. This is useful when you want to maintain a consistent streaming interface even when using models that only provide complete responses.
```ts
import { simulateStreamingMiddleware } from 'ai';

const middleware = simulateStreamingMiddleware();
```
<Snippet
  text={`import { simulateStreamingMiddleware } from "ai"`}
  prompt={false}
/>
This middleware doesn't accept any parameters.
Returns a middleware object that converts the complete response of a wrapped non-streaming model into a simulated stream.
```ts
import {
  simulateStreamingMiddleware,
  streamText,
  wrapLanguageModel,
} from 'ai';

// Example with a non-streaming model
const result = streamText({
  model: wrapLanguageModel({
    model: nonStreamingModel,
    middleware: simulateStreamingMiddleware(),
  }),
  prompt: 'Your prompt here',
});

// Now you can use the streaming interface
for await (const chunk of result.fullStream) {
  // Process streaming chunks
}
```
The middleware awaits the complete response from the underlying model, then creates a ReadableStream that emits chunks in the correct sequence, so downstream consumers can process the response exactly as if it had been streamed.
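To illustrate the idea, here is a minimal, self-contained sketch of how a complete response can be re-emitted as an ordered stream of chunks. This is not the `ai` package's actual implementation; the `simulateStream` and `collect` helpers, the chunk shapes, and the chunk size are hypothetical and exist only to demonstrate the concept.

```ts
// Hypothetical chunk shapes, loosely modeled on a streaming protocol.
// These are illustrative, not the ai package's real types.
type TextChunk = { type: 'text-delta'; text: string };
type FinishChunk = { type: 'finish' };
type Chunk = TextChunk | FinishChunk;

// Turn a complete text response into an async stream of chunks,
// emitting the text in fixed-size slices followed by a finish marker.
async function* simulateStream(
  fullText: string,
  chunkSize = 5,
): AsyncGenerator<Chunk> {
  for (let i = 0; i < fullText.length; i += chunkSize) {
    yield { type: 'text-delta', text: fullText.slice(i, i + chunkSize) };
  }
  // Signal the end of the simulated stream.
  yield { type: 'finish' };
}

// Consume the simulated stream just like a real one and
// reassemble the original text from the emitted deltas.
async function collect(fullText: string): Promise<string> {
  let out = '';
  for await (const chunk of simulateStream(fullText)) {
    if (chunk.type === 'text-delta') out += chunk.text;
  }
  return out;
}
```

Because the chunks are emitted in order and terminated by a finish marker, reassembling the deltas always reproduces the original response, which is the property that lets a non-streaming model sit behind a streaming interface.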