Laminar is an open-source platform for tracing and evaluating AI applications.
<Note>
  A version of this guide is available in [Laminar's
  docs](https://docs.lmnr.ai/tracing/integrations/vercel-ai-sdk).
</Note>

Laminar's tracing is based on OpenTelemetry and supports AI SDK telemetry.
To get started with Laminar's tracing, first install the `@lmnr-ai/lmnr` package.
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @lmnr-ai/lmnr" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @lmnr-ai/lmnr" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @lmnr-ai/lmnr" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @lmnr-ai/lmnr" dark />
  </Tab>
</Tabs>

Then, either sign up on Laminar or self-host an instance (GitHub) and create a new project.
In the project settings, create and copy the API key.
In your `.env` file:

```bash
LMNR_PROJECT_API_KEY=...
```
In Next.js, Laminar initialization should be done in `instrumentation.{ts,js}`:

```ts
export async function register() {
  // prevent this from running in the edge runtime
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { Laminar } = await import('@lmnr-ai/lmnr');
    Laminar.initialize({
      projectApiKey: process.env.LMNR_PROJECT_API_KEY,
    });
  }
}
```
In your `next.config.js` (`.ts` / `.mjs`), add the following lines:

```js
const nextConfig = {
  serverExternalPackages: ['@lmnr-ai/lmnr'],
};

export default nextConfig;
```
This is because Laminar depends on OpenTelemetry, which uses some Node.js-specific functionality, and we need to inform Next.js about it. Learn more in the Next.js docs.
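If your project uses a TypeScript config file, the equivalent `next.config.ts` might look like the following sketch, assuming the `NextConfig` type that Next.js exports:

```typescript
// next.config.ts — same external-package setting as the JS config above,
// so Next.js does not try to bundle @lmnr-ai/lmnr for the server.
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  serverExternalPackages: ['@lmnr-ai/lmnr'],
};

export default nextConfig;
```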
Then, when you call AI SDK functions in any of your API routes, add the Laminar tracer to the `experimental_telemetry` option.
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is Laminar flow?',
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
  },
});
```
This will create spans for `ai.generateText`, which Laminar collects and displays.
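For instance, in a Next.js App Router project, the call above might live in a route handler. The `app/api/completion/route.ts` path and the request body shape here are assumptions for illustration:

```typescript
// app/api/completion/route.ts — hypothetical route handler
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: openai('gpt-4o-mini'),
    prompt,
    experimental_telemetry: {
      isEnabled: true,
      // The tracer must come from Laminar so spans reach your Laminar project.
      tracer: getTracer(),
    },
  });

  return Response.json({ text });
}
```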
If you are using a Next.js version from 13.4 up to (but not including) 15, you will also need to enable the experimental instrumentation hook. Place the following in your `next.config.js`:

```js
module.exports = {
  experimental: {
    instrumentationHook: true,
  },
};
```
For more information, see Laminar's Next.js guide and Next.js instrumentation docs. You can also learn how to enable all traces for Next.js in the docs.
### @vercel/otel

Laminar can live alongside `@vercel/otel` and trace AI SDK calls. With the default Laminar setup, Laminar's spans are sent to Laminar, while the spans created by `@vercel/otel` are sent to the telemetry backend configured with Vercel.

```ts
import { registerOTel } from '@vercel/otel';

export async function register() {
  registerOTel('my-service-name');
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const { Laminar } = await import('@lmnr-ai/lmnr');
    // Make sure to initialize Laminar **after** `registerOTel`
    Laminar.initialize({
      projectApiKey: process.env.LMNR_PROJECT_API_KEY,
    });
  }
}
```
For an advanced configuration that allows you to trace all Next.js traces via Laminar, see an example repo.
### @sentry/node

Laminar can live alongside `@sentry/node` and trace AI SDK calls. Make sure to initialize Laminar **after** `Sentry.init`. This will ensure that both Sentry and Laminar traces are recorded correctly.

```ts
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    const Sentry = await import('@sentry/node');
    const { Laminar } = await import('@lmnr-ai/lmnr');

    Sentry.init({
      dsn: process.env.SENTRY_DSN,
    });

    // Make sure to initialize Laminar **after** `Sentry.init`
    Laminar.initialize({
      projectApiKey: process.env.LMNR_PROJECT_API_KEY,
    });
  }
}
```
### Node.js

Initialize tracing in your application:

```ts
import { Laminar } from '@lmnr-ai/lmnr';

Laminar.initialize();
```
This must be done once in your application, as early as possible, but after other tracing libraries (e.g. `@sentry/node`) are initialized. Read more in Laminar's docs.
### @sentry/node

Laminar can work alongside `@sentry/node` to trace AI SDK calls. Make sure to initialize Laminar **after** `Sentry.init`:

```ts
const Sentry = await import('@sentry/node');
const { Laminar } = await import('@lmnr-ai/lmnr');

Sentry.init({
  dsn: process.env.SENTRY_DSN,
});

Laminar.initialize({
  projectApiKey: process.env.LMNR_PROJECT_API_KEY,
});
```

This will ensure that both Sentry and Laminar traces are recorded correctly. The two libraries allow for additional advanced configuration, but the default setup above is recommended.
If you want to override the default span name, set `functionId` inside the telemetry options:

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

const { text } = await generateText({
  model: openai('gpt-4.1-nano'),
  prompt: `Write a poem about Laminar flow.`,
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
    functionId: 'poem-writer',
  },
});
```
If you want to trace not just the AI SDK calls but also other functions in your application, you can use Laminar's `observe` wrapper.

```ts
import { generateText } from 'ai';
import { observe } from '@lmnr-ai/lmnr';

const result = await observe({ name: 'my-function' }, async () => {
  // ... some work
  await generateText({
    //...
  });
  // ... some work
});
```
This will create a span with the name "my-function" and trace the function call. Inside it, you will see the nested ai.generateText spans.
To trace input arguments of the function that you wrap in observe, pass them to the wrapper as additional arguments. The return value of the function will be returned from the wrapper and traced as the span's output.
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { observe } from '@lmnr-ai/lmnr';

const result = await observe(
  { name: 'poem writer' },
  async (topic: string, mood: string) => {
    const { text } = await generateText({
      model: openai('gpt-4.1-nano'),
      prompt: `Write a poem about ${topic} in ${mood} mood.`,
    });
    return text;
  },
  'Laminar flow',
  'happy',
);
```
### Metadata

In Laminar, metadata is set on the trace level. Metadata contains key-value pairs and can be used to filter traces.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

const { text } = await generateText({
  model: openai('gpt-4.1-nano'),
  prompt: `Write a poem about Laminar flow.`,
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
    metadata: {
      'my-key': 'my-value',
      'another-key': 'another-value',
    },
  },
});
```
This is converted to Laminar's metadata and stored in the trace.
One of the reserved metadata keys is `tags`, which adds tags to the span. Tags can subsequently be used to filter traces in Laminar.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

const { text } = await generateText({
  model: openai('gpt-4.1-nano'),
  prompt: `Write a poem about Laminar flow.`,
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
    metadata: {
      tags: ['fallback-model', 'api-handler'],
    },
  },
});
```
Traces in Laminar can be grouped into sessions or by user ID; `sessionId` and `userId` are also reserved metadata keys.

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

const { text } = await generateText({
  model: openai('gpt-4.1-nano'),
  prompt: `Write a poem about Laminar flow.`,
  experimental_telemetry: {
    isEnabled: true,
    tracer: getTracer(),
    metadata: {
      sessionId: 'session-123',
      userId: 'user-123',
    },
  },
});
```
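In practice, the session and user values usually come from the incoming request rather than being hard-coded. A sketch of a route handler that does this — the `x-session-id` / `x-user-id` header names and the `anonymous` fallback are assumptions for illustration:

```typescript
// Hypothetical route handler: group traces by the caller's session and user.
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { getTracer } from '@lmnr-ai/lmnr';

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const { text } = await generateText({
    model: openai('gpt-4o-mini'),
    prompt,
    experimental_telemetry: {
      isEnabled: true,
      tracer: getTracer(),
      metadata: {
        // Assumed header names; substitute your own auth/session source.
        sessionId: req.headers.get('x-session-id') ?? 'anonymous',
        userId: req.headers.get('x-user-id') ?? 'anonymous',
      },
    },
  });

  return Response.json({ text });
}
```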