content/providers/05-observability/braintrust.mdx
Braintrust is an end-to-end platform for building AI applications. When building with the AI SDK, you can integrate Braintrust to log, monitor, and take action on real-world interactions.
Braintrust natively supports OpenTelemetry and works out of the box with the AI SDK, whether you are using Next.js or Node.js.

If you are using Next.js, use the Braintrust exporter with `@vercel/otel`:
```ts
// instrumentation.ts
import { registerOTel } from '@vercel/otel';
import { BraintrustExporter } from 'braintrust';

export function register() {
  registerOTel({
    serviceName: 'my-braintrust-app',
    traceExporter: new BraintrustExporter({
      parent: 'project_name:your-project-name',
      filterAISpans: true, // Only send AI-related spans
    }),
  });
}
```
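For the exporter to send traces, it needs to authenticate with Braintrust. As a sketch (the exact mechanism depends on your Braintrust SDK version), the SDK reads your API key from the `BRAINTRUST_API_KEY` environment variable:

```bash
# Assumption: the braintrust SDK picks up BRAINTRUST_API_KEY from the environment.
# Get the key from your Braintrust project settings.
export BRAINTRUST_API_KEY="your-api-key"
```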
Traced LLM calls will appear under the Braintrust project or experiment provided in the `parent` field.

When you call the AI SDK, make sure to set `experimental_telemetry`:
```ts
const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is 2 + 2?',
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      query: 'weather',
      location: 'San Francisco',
    },
  },
});
```
For example, in a Next.js route handler:

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = await streamText({
    model: openai('gpt-4o-mini'),
    prompt,
    experimental_telemetry: { isEnabled: true },
  });

  return result.toDataStreamResponse();
}
```
If you are using Node.js without a framework, you must configure the `NodeSDK` directly. In this case, it's more straightforward to use the `BraintrustSpanProcessor`.
First, install the necessary dependencies:
```bash
npm install ai @ai-sdk/openai braintrust @opentelemetry/sdk-node @opentelemetry/sdk-trace-base zod
```
Then, set up the OpenTelemetry SDK:
```ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
import { BraintrustSpanProcessor } from 'braintrust';

const sdk = new NodeSDK({
  spanProcessors: [
    new BraintrustSpanProcessor({
      parent: 'project_name:your-project-name',
      filterAISpans: true,
    }),
  ],
});

sdk.start();

async function main() {
  const result = await generateText({
    model: openai('gpt-4o-mini'),
    messages: [
      {
        role: 'user',
        content: 'What are my orders and where are they? My user ID is 123',
      },
    ],
    tools: {
      listOrders: tool({
        description: 'list all orders',
        parameters: z.object({ userId: z.string() }),
        execute: async ({ userId }) =>
          `User ${userId} has the following orders: 1`,
      }),
      viewTrackingInformation: tool({
        description: 'view tracking information for a specific order',
        parameters: z.object({ orderId: z.string() }),
        execute: async ({ orderId }) =>
          `Here is the tracking information for ${orderId}`,
      }),
    },
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'my-awesome-function',
      metadata: {
        something: 'custom',
        someOtherThing: 'other-value',
      },
    },
    maxSteps: 10,
  });

  console.log(result.text);

  // Flush any remaining spans before the process exits
  await sdk.shutdown();
}

main().catch(console.error);
```
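To try the script locally, you might run it with a TypeScript runner such as `tsx` (the file name and runner here are assumptions, not part of the original example):

```bash
# Hypothetical file name; any TypeScript runner (tsx, ts-node) works
BRAINTRUST_API_KEY="your-api-key" npx tsx telemetry-example.ts
```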
To see a step-by-step example, check out the Braintrust cookbook.
After you log your application in Braintrust, you can explore other workflows, such as running evaluations and monitoring real-world interactions, from the Braintrust dashboard.