cookbook/logging_observability/LiteLLM_Proxy_Langfuse.ipynb
This notebook demonstrates how to use LiteLLM Proxy with Langfuse
In this notebook we will set up LiteLLM Proxy to make requests to OpenAI, Anthropic, and Bedrock, and automatically log traces to Langfuse.
Define the following .env variables on the container that LiteLLM Proxy is running on.
## LLM API Keys
OPENAI_API_KEY=sk-proj-1234567890
ANTHROPIC_API_KEY=sk-ant-api03-1234567890
AWS_ACCESS_KEY_ID=1234567890
AWS_SECRET_ACCESS_KEY=1234567890
## Langfuse Logging
LANGFUSE_PUBLIC_KEY="pk-lf-xxxx9"
LANGFUSE_SECRET_KEY="sk-lf-xxxx9"
LANGFUSE_HOST="https://us.cloud.langfuse.com"
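Before starting the proxy, it can be useful to confirm that the variables above are actually set in the container's environment. A minimal sketch using a hypothetical helper (`missing_env_vars` is not part of LiteLLM):

```python
import os

# Hypothetical helper: report which of the expected env vars are unset or empty.
REQUIRED_VARS = [
    "OPENAI_API_KEY",
    "ANTHROPIC_API_KEY",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "LANGFUSE_PUBLIC_KEY",
    "LANGFUSE_SECRET_KEY",
    "LANGFUSE_HOST",
]

def missing_env_vars(required=REQUIRED_VARS):
    """Return the names of required variables that are missing or empty."""
    return [name for name in required if not os.environ.get(name)]
```

If `missing_env_vars()` returns a non-empty list, the corresponding providers or the Langfuse callback will fail at request time.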
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-3-5-sonnet-20241022
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY
  - model_name: us.amazon.nova-micro-v1:0
    litellm_params:
      model: bedrock/us.amazon.nova-micro-v1:0
      aws_access_key_id: os.environ/AWS_ACCESS_KEY_ID
      aws_secret_access_key: os.environ/AWS_SECRET_ACCESS_KEY

litellm_settings:
  callbacks: ["langfuse"]
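Assuming the config above is saved as `config.yaml` (the filename is an assumption), the proxy can be started with the LiteLLM CLI:

```shell
# Start LiteLLM Proxy with the config above (assumes it is saved as config.yaml)
litellm --config config.yaml
```

By default the proxy listens on port 4000, which matches the `LITELLM_PROXY_BASE_URL` used below.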
Now we will make our first LLM request to LiteLLM Proxy
Set LITELLM_PROXY_BASE_URL to the base URL of the LiteLLM Proxy, and LITELLM_VIRTUAL_KEY to the virtual key you want to use for authentication to LiteLLM Proxy.
LITELLM_PROXY_BASE_URL="http://0.0.0.0:4000"
LITELLM_VIRTUAL_KEY="sk-oXXRa1xxxxxxxxxxx"
import openai

client = openai.OpenAI(
    api_key=LITELLM_VIRTUAL_KEY,
    base_url=LITELLM_PROXY_BASE_URL
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "what is Langfuse?"
        }
    ],
)
response
LiteLLM will send the request / response, model, token counts (input + output), and cost to Langfuse.
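The token counts forwarded to Langfuse are the same ones returned in the OpenAI-format `usage` object on the response. A minimal sketch with a hypothetical helper (`summarize_usage` is illustrative, not a LiteLLM API):

```python
# Hypothetical helper: summarize the OpenAI-format usage object that
# LiteLLM Proxy returns and forwards to Langfuse.
def summarize_usage(usage: dict) -> str:
    """Format input/output/total token counts from a usage dict."""
    prompt = usage.get("prompt_tokens", 0)
    completion = usage.get("completion_tokens", 0)
    total = usage.get("total_tokens", prompt + completion)
    return f"input={prompt} output={completion} total={total}"
```

For the request above, you could pass `response.usage.model_dump()` to this helper to print what was logged.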
Now we can call the us.amazon.nova-micro-v1:0 and claude-3-5-sonnet-20241022 models defined in your config.yaml, both using the OpenAI request / response format.
import openai

client = openai.OpenAI(
    api_key=LITELLM_VIRTUAL_KEY,
    base_url=LITELLM_PROXY_BASE_URL
)

response = client.chat.completions.create(
    model="us.amazon.nova-micro-v1:0",
    messages=[
        {
            "role": "user",
            "content": "what is Langfuse?"
        }
    ],
)
response
Here is an example of how you can set Langfuse-specific params on your client-side request. See the full list of supported Langfuse params here
You can view the logged trace of this request here
import openai

client = openai.OpenAI(
    api_key=LITELLM_VIRTUAL_KEY,
    base_url=LITELLM_PROXY_BASE_URL
)

response = client.chat.completions.create(
    model="us.amazon.nova-micro-v1:0",
    messages=[
        {
            "role": "user",
            "content": "what is Langfuse?"
        }
    ],
    extra_body={
        "metadata": {
            "generation_id": "1234567890",
            "trace_id": "567890",
            "trace_user_id": "user_1234567890",
            "tags": ["tag1", "tag2"]
        }
    }
)
response