MLflow Tracing provides automatic tracing for applications built with the Vercel AI SDK (the `ai` package) via OpenTelemetry, unlocking observability for TypeScript and JavaScript apps. When enabled, MLflow records a trace for each AI SDK call, including its inputs, outputs, latency, and token usage.
Enabling MLflow tracing for the Vercel AI SDK is straightforward if you are using Next.js.
<Note>No app handy? Try Vercel’s demo chatbot: https://vercel.com/templates/next.js/ai-chatbot-telemetry</Note>

Start a local MLflow tracking server (this requires the MLflow CLI, e.g. `pip install mlflow`):

```bash
mlflow server --backend-store-uri sqlite:///mlruns.db --port 5000
```
You can also start the server with Docker Compose; see the MLflow Setup Guide.
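As a quick alternative, a single `docker run` along these lines should also work; `ghcr.io/mlflow/mlflow` is MLflow's published image, but the exact tag and flags here are a sketch, so pin a version and check the Setup Guide in practice:

```bash
# Runs the MLflow server in a container and exposes the UI/OTLP endpoint on port 5000
docker run -p 5000:5000 ghcr.io/mlflow/mlflow \
  mlflow server --host 0.0.0.0 --port 5000 --backend-store-uri sqlite:///mlruns.db
```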
Add these to `.env.local`:

```bash
OTEL_EXPORTER_OTLP_ENDPOINT=<your-mlflow-tracking-server-endpoint>
OTEL_EXPORTER_OTLP_TRACES_HEADERS=x-mlflow-experiment-id=<your-experiment-id>
OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=http/protobuf
```

For local testing, use `OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:5000`.
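For example, a complete `.env.local` for a local server might look like the following; experiment `0` is MLflow's built-in Default experiment, so substitute the ID of the experiment you actually want traces to land in:

```bash
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:5000
OTEL_EXPORTER_OTLP_TRACES_HEADERS=x-mlflow-experiment-id=0
OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=http/protobuf
```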
Install the Vercel OpenTelemetry integration:

```bash
pnpm i @opentelemetry/api @vercel/otel
```
Create `instrumentation.ts` in your project root:

```typescript
import { registerOTel } from '@vercel/otel';

export async function register() {
  registerOTel({ serviceName: 'next-app' });
}
```
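On Next.js 14 and earlier, the instrumentation hook is behind an experimental flag (it is enabled by default from Next.js 15), so you may also need this in `next.config.js`:

```typescript
// next.config.js — only needed on older Next.js versions
module.exports = {
  experimental: { instrumentationHook: true },
};
```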
Then enable telemetry where you call the AI SDK (for example in `route.ts`):

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: openai('gpt-5'),
    prompt,
    // Opt this call into OpenTelemetry-based tracing
    experimental_telemetry: { isEnabled: true },
  });

  return new Response(JSON.stringify({ text }), {
    headers: { 'Content-Type': 'application/json' },
  });
}
```
See the Vercel OpenTelemetry docs for advanced options like context propagation.
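The `experimental_telemetry` setting also accepts a `functionId` to name the call site and a `metadata` map whose entries are attached to the emitted spans; the identifier and metadata values below are illustrative:

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: openai('gpt-5'),
  prompt: 'What is MLflow?',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'chat-completion',      // illustrative identifier for this call site
    metadata: { userId: 'user-123' },   // illustrative custom attributes
  },
});
```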
Start your Next.js app and open the MLflow UI at the tracking server endpoint (e.g., http://localhost:5000). Traces for AI SDK calls appear in the configured experiment.
For other Node.js frameworks, wire up the OpenTelemetry Node SDK and OTLP exporter manually, for example in a `main.ts`:
```typescript
import { init } from 'mlflow-tracing';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { NodeSDK } from '@opentelemetry/sdk-node';
import { SimpleSpanProcessor } from '@opentelemetry/sdk-trace-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-proto';

// Export spans to the MLflow tracking server over OTLP
const sdk = new NodeSDK({
  spanProcessors: [
    new SimpleSpanProcessor(
      new OTLPTraceExporter({
        url: '<your-mlflow-tracking-server-endpoint>/v1/traces',
        headers: { 'x-mlflow-experiment-id': '<your-experiment-id>' },
      }),
    ),
  ],
});
sdk.start();
init();

// Make an AI SDK call with telemetry enabled
const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'What is MLflow?',
  experimental_telemetry: { isEnabled: true },
});
console.log(result.text);

// Flush any pending spans before the process exits
await sdk.shutdown();
```
Run the script:

```bash
npx tsx main.ts
```
Streaming is supported. As with `generateText`, set `experimental_telemetry.isEnabled` to `true`.
```typescript
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// streamText returns synchronously; the trace completes once the stream is consumed
const stream = streamText({
  model: openai('gpt-5'),
  prompt: 'Explain vector databases in one paragraph.',
  experimental_telemetry: { isEnabled: true },
});

for await (const part of stream.textStream) {
  process.stdout.write(part);
}
```
To disable tracing for the Vercel AI SDK, set `experimental_telemetry: { isEnabled: false }` on the AI SDK call.
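If you want to toggle tracing per environment instead of hardcoding the flag, you can derive it from an environment variable; the variable name below is a hypothetical choice for this sketch, not an AI SDK or MLflow convention:

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: openai('gpt-5'),
  prompt: 'What is MLflow?',
  // MLFLOW_TRACING_ENABLED is a hypothetical variable name for this example
  experimental_telemetry: { isEnabled: process.env.MLFLOW_TRACING_ENABLED === 'true' },
});
```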