# Helicone

The Helicone AI Gateway provides access to hundreds of AI models, with tracing and monitoring integrated directly through Helicone's observability platform.

Learn more about Helicone's capabilities in the Helicone Documentation.
## Setup

The Helicone provider is available in the `@helicone/ai-sdk-provider` package. You can install it with:
<Tabs items={['pnpm', 'npm', 'yarn', 'bun']}>
  <Tab>
    <Snippet text="pnpm add @helicone/ai-sdk-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="npm install @helicone/ai-sdk-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="yarn add @helicone/ai-sdk-provider" dark />
  </Tab>
  <Tab>
    <Snippet text="bun add @helicone/ai-sdk-provider" dark />
  </Tab>
</Tabs>
## Provider Instance

To get started with Helicone, use the `createHelicone` function to create a provider instance, then query any model you like:
```ts
import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = await generateText({
  model: helicone('claude-4.5-haiku'),
  prompt: 'Write a haiku about artificial intelligence',
});

console.log(result.text);
```
You can obtain your Helicone API key from the Helicone Dashboard.
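The snippets on this page read the key from the `HELICONE_API_KEY` environment variable. One way to set it in your shell (the key value below is a placeholder, not a real key format):

```shell
# Set the API key the examples read via process.env.HELICONE_API_KEY
# (placeholder value; use the key from your Helicone Dashboard).
export HELICONE_API_KEY="your-helicone-api-key"
```

In a deployed app you would typically put this in a `.env` file or your platform's environment configuration instead.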
## Examples

Here are examples of using Helicone with the AI SDK.
### `generateText`

```ts
import { createHelicone } from '@helicone/ai-sdk-provider';
import { generateText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const { text } = await generateText({
  model: helicone('gemini-2.5-flash-lite'),
  prompt: 'What is Helicone?',
});

console.log(text);
```
### `streamText`

```ts
import { createHelicone } from '@helicone/ai-sdk-provider';
import { streamText } from 'ai';

const helicone = createHelicone({
  apiKey: process.env.HELICONE_API_KEY,
});

const result = streamText({
  model: helicone('deepseek-v3.1-terminus'),
  prompt: 'Write a short story about a robot learning to paint',
  maxOutputTokens: 300,
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

console.log('\n\nStream completed!');
```
## Advanced Features

Helicone offers several advanced features to enhance your AI applications:

- **Model flexibility**: Switch between hundreds of models without changing your code or managing multiple API keys.
- **Cost management**: Track costs per model in real time through Helicone's LLM observability dashboard.
- **Observability**: Access comprehensive analytics and logs for all your requests.
- **Prompt management**: Manage prompts and versioning through the Helicone dashboard.
- **Caching**: Cache responses to reduce costs and latency.
- **Regular updates**: Automatic access to new models and features as they become available.
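Per-request features such as caching are typically controlled through request headers sent to the gateway. The sketch below is a hypothetical illustration, not the provider's documented API: it assumes Helicone reads a `Helicone-Cache-Enabled` header and that you forward it via the AI SDK's per-call `headers` option; verify the exact header names in the Helicone Documentation.

```typescript
// Hypothetical sketch: headers to opt a single request into Helicone's
// response caching. The header name is an assumption; check Helicone's docs.
const cacheHeaders: Record<string, string> = {
  'Helicone-Cache-Enabled': 'true', // ask the gateway to cache this response
};

// Usage with the provider instance from the setup example above:
//
// const { text } = await generateText({
//   model: helicone('gemini-2.5-flash-lite'),
//   prompt: 'What is Helicone?',
//   headers: cacheHeaders, // forwarded to the gateway with the request
// });
```

Scoping the headers to a single call (rather than the provider instance) keeps caching opt-in per request, so latency-sensitive calls can be cached while others stay fresh.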
For more information about these features and advanced configuration options, visit the Helicone Documentation.