examples/provider-litellm/README.md
You can run this example with:

```bash
npx promptfoo@latest init --example provider-litellm
cd provider-litellm
```
This example demonstrates how to use the LiteLLM provider with promptfoo to evaluate multiple models through a unified interface.
LiteLLM provides a unified interface to 400+ LLMs. Instead of managing different APIs and authentication methods for each provider, you can use a single interface to access models from OpenAI, Anthropic, Google, and many more.
Set your API keys:

```bash
export OPENAI_API_KEY=your-openai-key

# Optional: Add other providers
export ANTHROPIC_API_KEY=your-anthropic-key
export GOOGLE_AI_API_KEY=your-google-key
```
Start the LiteLLM proxy:

```bash
# Use the provided script
./start-proxy.sh

# Or manually:
pip install litellm[proxy]
litellm --model gpt-4.1 --model claude-sonnet-4-6 --model gemini-2.5-pro --model text-embedding-3-large
```
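Once the proxy is up, you can check which models it exposes (assuming it is listening on the default port 4000):

```bash
# OpenAI-compatible model listing served by the LiteLLM proxy
curl http://localhost:4000/v1/models
```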
Run the evaluation:

```bash
npx promptfoo@latest eval
```
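After the eval completes, you can browse the results in promptfoo's web viewer:

```bash
npx promptfoo@latest view
```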
The LiteLLM provider in promptfoo connects to a LiteLLM proxy server (default port 4000). The proxy handles authentication with each upstream provider and exposes all of the configured models behind a single OpenAI-compatible API.
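Because the proxy speaks the OpenAI API format, the same request shape works for every model it serves. A quick sanity check (assuming the proxy from this example is running locally on the default port):

```bash
# Same OpenAI-style request shape; the proxy routes it to Anthropic
curl http://localhost:4000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "claude-sonnet-4-6", "messages": [{"role": "user", "content": "Say hello"}]}'
```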
This example includes three files:

- `promptfooconfig.yaml` - Main evaluation configuration
- `litellm_config.yaml` - LiteLLM proxy server configuration
- `start-proxy.sh` - Helper script to start the proxy

The example evaluates translation and creative writing tasks across three different providers:
```yaml
providers:
  - litellm:gpt-4.1
  - litellm:claude-sonnet-4-6
  - litellm:gemini-2.5-pro
```
An embedding model is also configured so that similarity-based assertions run through LiteLLM:

```yaml
defaultTest:
  options:
    provider:
      embedding: litellm:embedding:text-embedding-3-large
```
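The embedding provider is what similarity assertions use. A minimal sketch of a test that exercises it (the variable, reference value, and threshold below are illustrative, not taken from this example's config):

```yaml
tests:
  - vars:
      text: 'Hello, how are you?'
    assert:
      # `similar` scores the output against the reference using the embedding provider above
      - type: similar
        value: 'Bonjour, comment allez-vous ?'
        threshold: 0.8
```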
If the eval can't reach the proxy, make sure it has been started:

```bash
./start-proxy.sh
```

Check that the proxy is running:

```bash
curl http://localhost:4000/health
```

Verify your API keys are set:

```bash
echo $OPENAI_API_KEY
```
If your LiteLLM proxy runs on a different host or port:
```yaml
providers:
  - id: litellm:gpt-4.1
    config:
      apiBaseUrl: https://your-litellm-server.com
```
For more complex setups, use the config file:
```bash
litellm --config litellm_config.yaml
```
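The config file follows LiteLLM's standard `model_list` format. A minimal sketch (the example's actual `litellm_config.yaml` may differ):

```yaml
# Minimal LiteLLM proxy config: map public model names to provider-specific models
model_list:
  - model_name: gpt-4.1
    litellm_params:
      model: openai/gpt-4.1
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet-4-6
    litellm_params:
      model: anthropic/claude-sonnet-4-6
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: text-embedding-3-large
    litellm_params:
      model: openai/text-embedding-3-large
      api_key: os.environ/OPENAI_API_KEY
```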