Helicone is an open-source LLM observability platform that helps you monitor, analyze, and optimize your AI applications. Every request is tracked automatically, providing comprehensive insights into performance, costs, user behavior, and model usage without any additional instrumentation.
The Helicone provider is available in the `@helicone/ai-sdk-provider` package. Install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @helicone/ai-sdk-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @helicone/ai-sdk-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @helicone/ai-sdk-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @helicone/ai-sdk-provider" dark />
  </Tab>
</Tabs>
Setting up Helicone:

1. Create a Helicone account at [helicone.ai](https://helicone.ai)
2. Get your API key from the Helicone Dashboard
3. Set your API key as an environment variable:

```bash
HELICONE_API_KEY=your-helicone-api-key
```
Use Helicone in your application:
```ts
import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

// Use the provider with any supported model: https://helicone.ai/models
const result = await generateText({
  model: helicone('claude-4.5-haiku'),
  prompt: 'Hello world',
});

console.log(result.text);
```
That's it! Your requests are now logged and monitored through Helicone automatically.
→ Learn more about Helicone AI Gateway
Helicone provides comprehensive observability for your AI applications with zero additional instrumentation:
- Automatic Request Tracking
- Analytics Dashboard
- User & Session Analytics
- Cost Monitoring
- Debugging & Troubleshooting
→ Learn more about Helicone Observability
Track individual user behavior and analyze usage patterns across your application. This helps you understand which users are most active, identify power users, and monitor per-user costs:
```ts
import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await generateText({
  model: helicone('gpt-4o-mini', {
    extraBody: {
      helicone: {
        userId: '[email protected]',
      },
    },
  }),
  prompt: 'Hello world',
});
```
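In a real application, the `userId` typically comes from your authenticated request context rather than a hard-coded string. A minimal sketch (the `AppUser` shape and the `heliconeUserOptions` helper are illustrative, not part of the provider API):

```ts
// Hypothetical shape for your app's signed-in user — adapt to your auth layer.
interface AppUser {
  id: string;
  email: string;
}

// Build the provider options so per-user metrics key off a stable identifier,
// which stays consistent even if the user's email changes.
function heliconeUserOptions(user: AppUser) {
  return {
    extraBody: {
      helicone: {
        userId: user.id,
      },
    },
  };
}

// Usage (sketch):
// const result = await generateText({
//   model: helicone('gpt-4o-mini', heliconeUserOptions(currentUser)),
//   prompt: 'Hello world',
// });
```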
What you can track:
→ Learn more about User Metrics
Add structured metadata to segment and analyze requests by feature, environment, or any custom dimension. This enables powerful filtering and insights in your analytics dashboard:
```ts
import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await generateText({
  model: helicone('gpt-4o-mini', {
    extraBody: {
      helicone: {
        properties: {
          feature: 'translation',
          source: 'mobile-app',
          language: 'French',
          environment: 'production',
        },
      },
    },
  }),
  prompt: 'Translate this text to French',
});
```
Use cases for custom properties:
→ Learn more about Custom Properties
Group related requests into sessions to analyze conversation flows and multi-turn interactions. This is essential for understanding user journeys and debugging complex conversations:
```ts
import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await generateText({
  model: helicone('gpt-4o-mini', {
    extraBody: {
      helicone: {
        sessionId: 'convo-123',
        sessionName: 'Travel Planning',
        sessionPath: '/chats/travel',
      },
    },
  }),
  prompt: 'Tell me more about that',
});
```
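To keep an entire conversation grouped under one session, reuse the same session fields on every call. A minimal sketch (the `sessionOptions` helper is illustrative, not part of the provider API):

```ts
// Illustrative helper: build provider options for a session so every
// request in a multi-turn conversation shares the same sessionId.
function sessionOptions(sessionId: string, name: string, path: string) {
  return {
    extraBody: {
      helicone: {
        sessionId,
        sessionName: name,
        sessionPath: path,
      },
    },
  };
}

const travelSession = sessionOptions('convo-123', 'Travel Planning', '/chats/travel');

// Every turn of the conversation passes the same options (sketch):
// await generateText({ model: helicone('gpt-4o-mini', travelSession), prompt: 'Plan a weekend trip' });
// await generateText({ model: helicone('gpt-4o-mini', travelSession), prompt: 'Tell me more about that' });
```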
Session tracking benefits:
Add tags to organize and filter requests in your analytics dashboard:
```ts
import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await generateText({
  model: helicone('gpt-4o-mini', {
    extraBody: {
      helicone: {
        tags: ['customer-support', 'urgent'],
        properties: {
          ticketId: 'TICKET-789',
          priority: 'high',
          department: 'support',
        },
      },
    },
  }),
  prompt: 'Help resolve this customer issue',
});
```
Tags insights:
→ Learn more about Helicone Features
Monitor streaming responses with full observability, including time-to-first-token and total streaming duration:
```ts
import { createHelicone } from '@helicone/ai-sdk-provider';
import { streamText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await streamText({
  model: helicone('gpt-4o-mini', {
    extraBody: {
      helicone: {
        userId: '[email protected]',
        sessionId: 'stream-session-123',
        tags: ['streaming', 'content-generation'],
      },
    },
  }),
  prompt: 'Write a short story about AI',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```
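Helicone records streaming timings server-side, but you can cross-check time-to-first-token locally by timing the first chunk of the stream. A sketch that works with any async-iterable text stream (the `timeFirstChunk` helper is illustrative, not part of the provider API):

```ts
// Illustrative helper: consume an async-iterable stream, forwarding each
// chunk to a callback and returning the milliseconds until the first chunk
// arrived (or null if the stream was empty).
async function timeFirstChunk<T>(
  stream: AsyncIterable<T>,
  onChunk: (chunk: T) => void,
): Promise<number | null> {
  const start = Date.now();
  let firstTokenMs: number | null = null;
  for await (const chunk of stream) {
    if (firstTokenMs === null) firstTokenMs = Date.now() - start;
    onChunk(chunk);
  }
  return firstTokenMs;
}

// Usage with the streaming example above (sketch):
// const ttft = await timeFirstChunk(result.textStream, (c) => process.stdout.write(c));
// console.log(`\ntime to first token: ${ttft}ms`);
```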
Streaming metrics tracked: