Langfuse (GitHub) is an open source LLM engineering platform that helps teams collaboratively develop, monitor, and debug AI applications. Langfuse integrates with the AI SDK via OpenTelemetry to capture and visualize traces of your AI SDK calls.
The AI SDK supports tracing via OpenTelemetry. With the `LangfuseSpanProcessor`, you can collect these traces in Langfuse.

While telemetry is experimental (docs), you can enable it by setting `experimental_telemetry` on each request that you want to trace.
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: { isEnabled: true },
});
```
To collect the traces in Langfuse, you need to add the `LangfuseSpanProcessor` to your application.

You can set the Langfuse credentials via environment variables or pass them directly to the `LangfuseSpanProcessor` constructor.

To get your Langfuse API keys, you can self-host Langfuse or sign up for Langfuse Cloud. Create a project in the Langfuse dashboard to get your `secretKey` and `publicKey`.
<Tabs items={["Environment Variables", "Constructor"]}>
<Tab>LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_BASEURL="https://cloud.langfuse.com" # πͺπΊ EU region, use "https://us.cloud.langfuse.com" for US region
import { LangfuseSpanProcessor } from '@langfuse/otel';
new LangfuseSpanProcessor({
secretKey: 'sk-lf-...',
publicKey: 'pk-lf-...',
baseUrl: 'https://cloud.langfuse.com', // πͺπΊ EU region
// baseUrl: "https://us.cloud.langfuse.com", // πΊπΈ US region
});
Now you need to register this span processor via the OpenTelemetry SDK.
<Tabs items={["Next.js","Node.js"]}> <Tab>
Next.js supports OpenTelemetry instrumentation at the framework level. Learn more about it in the Next.js OpenTelemetry guide.
Install dependencies:
```bash
npm install @langfuse/otel @langfuse/tracing @opentelemetry/sdk-trace-node
```
Add the `LangfuseSpanProcessor` to your instrumentation file using a manual OpenTelemetry setup via `NodeTracerProvider`:
```ts
import { LangfuseSpanProcessor, ShouldExportSpan } from '@langfuse/otel';
import { NodeTracerProvider } from '@opentelemetry/sdk-trace-node';

// Optional: filter out Next.js infra spans
const shouldExportSpan: ShouldExportSpan = span => {
  return span.otelSpan.instrumentationScope.name !== 'next.js';
};

export const langfuseSpanProcessor = new LangfuseSpanProcessor({
  shouldExportSpan,
});

const tracerProvider = new NodeTracerProvider({
  spanProcessors: [langfuseSpanProcessor],
});

tracerProvider.register();
```
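Once the processor is registered, any route handler or server action can opt into tracing by enabling `experimental_telemetry`. A minimal sketch (the route path and prompt are illustrative):

```ts
// app/api/story/route.ts (hypothetical route)
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

export async function GET() {
  const result = await generateText({
    model: openai('gpt-4o'),
    prompt: 'Write a short story about a cat.',
    experimental_telemetry: { isEnabled: true },
  });

  return Response.json({ text: result.text });
}
```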
</Tab>
<Tab>

Install dependencies:

```bash
npm install ai @ai-sdk/openai @langfuse/otel @opentelemetry/sdk-node
```
Add the `LangfuseSpanProcessor` to your OpenTelemetry setup:
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { LangfuseSpanProcessor } from '@langfuse/otel';
import { NodeSDK } from '@opentelemetry/sdk-node';

const sdk = new NodeSDK({
  spanProcessors: [new LangfuseSpanProcessor()],
});

sdk.start();

async function main() {
  const result = await generateText({
    model: openai('gpt-4o'),
    maxOutputTokens: 50,
    prompt: 'Invent a new holiday and describe its traditions.',
    experimental_telemetry: {
      isEnabled: true,
      functionId: 'my-awesome-function',
      metadata: {
        something: 'custom',
        someOtherThing: 'other-value',
      },
    },
  });

  console.log(result.text);

  await sdk.shutdown(); // Flushes the trace to Langfuse
}

main().catch(console.error);
```

</Tab>
</Tabs>
Done! All traces that contain AI SDK spans are automatically captured in Langfuse.
Check out the sample repository (langfuse/langfuse-vercel-ai-nextjs-example), based on the next-openai template, which showcases the integration of Langfuse with Next.js and the AI SDK.
You can create a Langfuse trace and pass its ID to AI SDK calls to group multiple executions under one trace. The name passed in `functionId` becomes the root span name of the respective execution.
```ts
import { randomUUID } from 'crypto';
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { Langfuse } from 'langfuse';

const langfuse = new Langfuse();
const parentTraceId = randomUUID();

langfuse.trace({
  id: parentTraceId,
  name: 'holiday-traditions',
});

for (let i = 0; i < 3; i++) {
  const result = await generateText({
    model: openai('gpt-3.5-turbo'),
    maxOutputTokens: 50,
    prompt: 'Invent a new holiday and describe its traditions.',
    experimental_telemetry: {
      isEnabled: true,
      functionId: `holiday-tradition-${i}`,
      metadata: {
        langfuseTraceId: parentTraceId,
        langfuseUpdateParent: false, // Do not update the parent trace with execution results
      },
    },
  });

  console.log(result.text);
}

await langfuse.flushAsync();
await sdk.shutdown(); // NodeSDK instance from the OpenTelemetry setup above
```
The resulting trace hierarchy will look roughly like this (root span names come from the `functionId` values in the loop above):
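```
holiday-traditions
├── holiday-tradition-0
├── holiday-tradition-1
└── holiday-tradition-2
```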
By default, the exporter captures the input and output of each request. You can disable this behavior by setting the `recordInputs` and `recordOutputs` options to `false`.
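For example, to trace a request without capturing potentially sensitive prompt and completion content (a minimal sketch using the AI SDK's `recordInputs`/`recordOutputs` telemetry settings):

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: false, // do not capture the prompt
    recordOutputs: false, // do not capture the completion
  },
});
```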
You can link Langfuse prompts to AI SDK generations by setting the `langfusePrompt` property in the `metadata` field:
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { Langfuse } from 'langfuse';

const langfuse = new Langfuse();

const fetchedPrompt = await langfuse.getPrompt('my-prompt');

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: fetchedPrompt.prompt,
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      langfusePrompt: fetchedPrompt.toJSON(),
    },
  },
});
```
The resulting generation will have the prompt linked to the trace in Langfuse. Learn more about prompts in the Langfuse documentation.
All metadata fields are automatically captured by the exporter. You can also pass custom trace attributes to, for example, track users or sessions.
```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Write a short story about a cat.',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'my-awesome-function', // Trace name
    metadata: {
      langfuseTraceId: 'trace-123', // Langfuse trace
      tags: ['story', 'cat'], // Custom tags
      userId: 'user-123', // Langfuse user
      sessionId: 'session-456', // Langfuse session
      foo: 'bar', // Any custom attribute recorded in metadata
    },
  },
});
```
"ai": "^3.3.0" to use the telemetry feature. In case of any issues, please update to the latest version.skipOpenTelemetrySetup: true in Sentry.init