LangWatch ([GitHub](https://github.com/langwatch/langwatch)) is an LLM Ops platform for monitoring, experimenting with, measuring, and improving LLM pipelines, with a fair-code distribution model.
Obtain your `LANGWATCH_API_KEY` from the LangWatch dashboard.
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add langwatch" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install langwatch" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add langwatch" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add langwatch" dark />
  </Tab>
</Tabs>

Ensure `LANGWATCH_API_KEY` is set:
<Tabs items={["Environment variables", "Client parameters"]} >
<Tab title="Environment variable">LANGWATCH_API_KEY='your_api_key_here'
import { LangWatch } from 'langwatch';
const langwatch = new LangWatch({
apiKey: 'your_api_key_here',
});
Traces can carry a `thread_id` in their metadata, making the individual messages part of a conversation, and optionally a `user_id` to track user analytics.

The AI SDK supports tracing via the Next.js OpenTelemetry integration. By using the `LangWatchExporter`, you can automatically send those traces to LangWatch.
First, you need to install the necessary dependencies:
```bash
npm install @vercel/otel langwatch @opentelemetry/api-logs @opentelemetry/instrumentation @opentelemetry/sdk-logs
```
Then set up OpenTelemetry for your application. Follow one of the tabs below, depending on whether you are using the AI SDK with Next.js or on Node.js:
<Tabs items={['Next.js', 'Node.js']}>
  <Tab title="Next.js">
You need to enable the `instrumentationHook` in your `next.config.js` file if you haven't already:
```js
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    instrumentationHook: true,
  },
};

module.exports = nextConfig;
```
Next, create a file named `instrumentation.ts` (or `.js`) in the root directory of the project (or inside the `src` folder if you are using one), with `LangWatchExporter` as the `traceExporter`:
```ts
import { registerOTel } from '@vercel/otel';
import { LangWatchExporter } from 'langwatch';

export function register() {
  registerOTel({
    serviceName: 'next-app',
    traceExporter: new LangWatchExporter(),
  });
}
```
(Read more about Next.js OpenTelemetry configuration in the official guide.)
Finally, enable `experimental_telemetry` tracking on the AI SDK calls you want to trace:
```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await generateText({
  model: openai('gpt-4o-mini'),
  prompt:
    'Explain why a chicken would make a terrible astronaut, be creative and humorous about it.',
  experimental_telemetry: {
    isEnabled: true,
    // optional metadata
    metadata: {
      userId: 'myuser-123',
      threadId: 'mythread-123',
    },
  },
});
```
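The same `experimental_telemetry` option works on other AI SDK calls as well; for example, a minimal sketch with `streamText` (the prompt and metadata values are illustrative):

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = await streamText({
  model: openai('gpt-4o-mini'),
  prompt: 'Write a haiku about telemetry.',
  experimental_telemetry: {
    isEnabled: true,
    metadata: { userId: 'myuser-123', threadId: 'mythread-123' },
  },
});

// Consume the stream; the trace is exported once the call completes.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```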
  </Tab>
  <Tab title="Node.js">

Once you have set up OpenTelemetry, you can use the `LangWatchExporter` to automatically send your traces to LangWatch:
```ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { LangWatchExporter } from 'langwatch';

const sdk = new NodeSDK({
  traceExporter: new LangWatchExporter({
    apiKey: process.env.LANGWATCH_API_KEY,
  }),
  // ...
});
```
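For reference, a minimal runnable setup, assuming `@opentelemetry/sdk-node`; note that `sdk.start()` must run before any traced code executes (the service name below is illustrative):

```ts
import { NodeSDK } from '@opentelemetry/sdk-node';
import { LangWatchExporter } from 'langwatch';

const sdk = new NodeSDK({
  traceExporter: new LangWatchExporter({
    apiKey: process.env.LANGWATCH_API_KEY,
  }),
  serviceName: 'my-node-app', // illustrative service name
});

// Start the SDK before importing or running any traced application code.
sdk.start();
```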
  </Tab>
</Tabs>

That's it! Your messages will now be visible on LangWatch.
You can find a full example project, with a more complex pipeline using the AI SDK and the LangWatch integration, on our GitHub.
The docs below cover manual integration. If you are not using the AI SDK OpenTelemetry integration, you can manually start a trace to capture your messages:
```ts
import { LangWatch } from 'langwatch';

const langwatch = new LangWatch();

const trace = langwatch.getTrace({
  metadata: { threadId: 'mythread-123', userId: 'myuser-123' },
});
```
Then, you can start an LLM span inside the trace with the input about to be sent to the LLM:
```ts
const span = trace.startLLMSpan({
  name: 'llm',
  model: model,
  input: {
    type: 'chat_messages',
    value: messages,
  },
});
```
This captures the LLM input and registers the time the call started. Once the LLM call is done, end the span to register the finish timestamp, and capture the output and the token metrics, which are used for cost calculation, e.g.:
```ts
span.end({
  output: {
    type: 'chat_messages',
    value: [chatCompletion.choices[0]!.message],
  },
  metrics: {
    promptTokens: chatCompletion.usage?.prompt_tokens,
    completionTokens: chatCompletion.usage?.completion_tokens,
  },
});
```
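Putting the pieces together, here is a minimal end-to-end sketch of the manual integration, assuming the official `openai` package (the model, messages, and metadata values are illustrative):

```ts
import { LangWatch } from 'langwatch';
import OpenAI from 'openai';

const langwatch = new LangWatch();
const client = new OpenAI();

const messages = [
  { role: 'user' as const, content: 'Tell me a joke about observability.' },
];

const trace = langwatch.getTrace({
  metadata: { threadId: 'mythread-123', userId: 'myuser-123' },
});

// Start the span before the call so the start timestamp is captured.
const span = trace.startLLMSpan({
  name: 'llm',
  model: 'gpt-4o-mini',
  input: { type: 'chat_messages', value: messages },
});

const chatCompletion = await client.chat.completions.create({
  model: 'gpt-4o-mini',
  messages,
});

// End the span with the output and token metrics for cost calculation.
span.end({
  output: {
    type: 'chat_messages',
    value: [chatCompletion.choices[0]!.message],
  },
  metrics: {
    promptTokens: chatCompletion.usage?.prompt_tokens,
    completionTokens: chatCompletion.usage?.completion_tokens,
  },
});
```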
For more information and examples, see the resources below:
If you have questions or need help, join our community: