# integration-helicone (Helicone AI Gateway)
You can run this example with:

```bash
npx promptfoo@latest init --example integration-helicone
cd integration-helicone
```
This example demonstrates how to use promptfoo's Helicone AI Gateway provider to route requests through a self-hosted Helicone AI Gateway instance, giving you unified access to multiple LLM providers through a single endpoint.
## Setup

**1. Set environment variables:**

```bash
# Set your provider API keys
export OPENAI_API_KEY=your_openai_api_key_here
export ANTHROPIC_API_KEY=your_anthropic_api_key_here # Optional
export GROQ_API_KEY=your_groq_api_key_here # Optional
```
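As a quick sanity check (assuming a POSIX shell), you can confirm the keys are actually exported before starting the gateway:

```bash
# Warn about any key that is missing from the environment
for key in OPENAI_API_KEY ANTHROPIC_API_KEY GROQ_API_KEY; do
  printenv "$key" > /dev/null || echo "warning: $key is not set"
done
```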
**2. Start the Helicone AI Gateway:**

```bash
# In a separate terminal, start the gateway
npx @helicone/ai-gateway@latest
```
The gateway will start on `http://localhost:8080` by default.
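Before running the eval, you can optionally confirm the gateway is reachable. This is only a sketch: the `/ai/chat/completions` path and the `openai/gpt-4o-mini` model string are assumptions based on the gateway's OpenAI-compatible interface and may differ by gateway version.

```bash
# Hypothetical smoke test -- verify the exact route for your gateway version
curl http://localhost:8080/ai/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'
```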
**3. Install promptfoo (if you haven't already):**

```bash
npm install -g promptfoo
```
## Running the Eval

From this directory, run:

```bash
promptfoo eval
```
This will send each test prompt to the Helicone AI Gateway at `http://localhost:8080`, which routes the requests to the underlying providers. The gateway gives you unified access to multiple providers through a single local endpoint and lets you switch routing between environments (see the router examples below).

## Configuration
The example configuration includes:
```yaml
providers:
  - id: helicone:openai/gpt-4o-mini
    label: 'OpenAI via Helicone Gateway'
    config:
      temperature: 0.7
      max_tokens: 500
```
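For context, a complete `promptfooconfig.yaml` built around this provider might look like the sketch below. The prompt and test values are illustrative placeholders, not this example's actual contents.

```yaml
# Illustrative sketch of a full config; prompt and test are placeholders
prompts:
  - 'Summarize the following in one sentence: {{text}}'

providers:
  - id: helicone:openai/gpt-4o-mini
    label: 'OpenAI via Helicone Gateway'
    config:
      temperature: 0.7
      max_tokens: 500

tests:
  - vars:
      text: 'Helicone AI Gateway routes LLM traffic through a single local endpoint.'
    assert:
      - type: contains
        value: 'Helicone'
```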
Provider IDs use the `helicone:provider/model` format. You can modify the configuration to target a different model (keeping the `provider/model` format), point at a different gateway URL, or select a router:

```yaml
providers:
  - id: helicone:openai/gpt-4o
    config:
      baseUrl: http://my-gateway.company.com:8080
      router: production
      temperature: 0.5
```
### Routers

Route to different environments using routers:
```yaml
providers:
  - id: helicone:openai/gpt-4o
    config:
      router: production
  - id: helicone:openai/gpt-4o-mini
    config:
      router: development
```
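Since both entries route through the same gateway, adding `label` fields (as in the first example above) makes the two routers easy to tell apart in the eval results:

```yaml
providers:
  - id: helicone:openai/gpt-4o
    label: 'Production router'
    config:
      router: production
  - id: helicone:openai/gpt-4o-mini
    label: 'Development router'
    config:
      router: development
```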
### Custom Gateway Configuration

If you're running your own Helicone AI Gateway with a custom configuration:
```yaml
providers:
  - id: helicone:custom-provider/custom-model
    config:
      baseUrl: http://localhost:9000
      headers:
        Custom-Header: value
```
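If a custom header carries a secret, you can keep it out of the config file. Assuming your promptfoo version supports Nunjucks-style `{{ env.* }}` templating in provider config values, a sketch like the following (with a hypothetical `GATEWAY_TOKEN` variable) reads the value from the environment:

```yaml
providers:
  - id: helicone:custom-provider/custom-model
    config:
      baseUrl: http://localhost:9000
      headers:
        # GATEWAY_TOKEN is a hypothetical environment variable for illustration
        Authorization: 'Bearer {{ env.GATEWAY_TOKEN }}'
```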
## Troubleshooting

If requests fail with authentication errors, verify that your `HELICONE_API_KEY` is correct. For detailed request logging:

```bash
LOG_LEVEL=debug promptfoo eval
```
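After the eval completes, you can browse the results in promptfoo's web UI:

```bash
promptfoo view
```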