cookbook/logging_observability/LiteLLM_Lunary.ipynb
https://docs.litellm.ai/docs/observability/lunary_integration
%pip install litellm lunary
import litellm
from litellm import completion
import os
# from https://app.lunary.ai/
os.environ["LUNARY_PUBLIC_KEY"] = ""
# LLM provider keys
# You can use any of the litellm supported providers: https://docs.litellm.ai/docs/providers
os.environ['OPENAI_API_KEY'] = ""
# set lunary as a callback, litellm will send the data to lunary
litellm.success_callback = ["lunary"]
# openai call
response = completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ]
)
print(response)
You can use LiteLLM seamlessly with Lunary templates to manage your prompts and completions.
Assuming you have created a template named "test-template" with a variable "question", you can use it like this:
import lunary
from litellm import completion
# render_template returns a dict of completion kwargs (model, messages, etc.)
template = lunary.render_template("test-template", {"question": "Hello!"})
response = completion(**template)
print(response)