Anannas AI is a unified inference gateway that provides access to 500+ models (OpenAI, Anthropic, Mistral, Gemini, DeepSeek, and more) through a single OpenAI-compatible API, making it easy to switch between providers and models without changing your code.
## Getting started

### Configuring Opik

Comet provides a hosted version of the Opik platform: simply create an account and grab your API key. You can also run the Opik platform locally; see the installation guide for more information.
First, ensure you have both the `opik` and `openai` packages installed:

```bash
pip install opik openai
```
Configure the Opik Python SDK for your deployment type. See the Python SDK configuration guide for detailed instructions on using either the `opik configure` CLI command or `opik.configure()` in code.

### Configuring Anannas AI

You'll need an Anannas AI API key. You can get one from Anannas AI.
Set it as an environment variable:

```bash
export ANANNAS_API_KEY="YOUR_ANANNAS_API_KEY"
```
Or set it programmatically:

```python
import os
import getpass

if "ANANNAS_API_KEY" not in os.environ:
    os.environ["ANANNAS_API_KEY"] = getpass.getpass("Enter your Anannas AI API key: ")
```
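If the key is missing, the client constructor will fail with a `KeyError` that doesn't explain what to do. You can fail fast with a clearer message instead; `require_api_key` below is a hypothetical convenience helper, not part of the Opik or Anannas SDKs:

```python
import os


def require_api_key(name: str = "ANANNAS_API_KEY") -> str:
    """Return the named API key, raising a clear error if it is unset."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(
            f"{name} is not set; export it or set os.environ[{name!r}] "
            "before creating the client."
        )
    return key
```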
## Logging LLM calls

Since Anannas AI provides an OpenAI-compatible API, you can use Opik's OpenAI SDK wrapper, `track_openai`, to automatically log Anannas AI calls as generations in Opik.
```python
import os

from openai import OpenAI
from opik.integrations.openai import track_openai

# Create an OpenAI client pointed at Anannas AI's base URL
client = OpenAI(
    api_key=os.environ["ANANNAS_API_KEY"],
    base_url="https://api.anannas.ai/v1",
)

# Wrap the client with Opik tracking
client = track_openai(client, project_name="anannas-integration-demo")

# Make a chat completion request
response = client.chat.completions.create(
    model="anthropic/claude-3-5-sonnet",  # Any of the 500+ available models
    messages=[
        {"role": "system", "content": "You are a knowledgeable AI assistant."},
        {"role": "user", "content": "What are some interesting facts about tropical fruits?"},
    ],
)

# Print the assistant's reply
print(response.choices[0].message.content)
```
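The wrapped client can also be used for streaming responses, assuming the standard OpenAI SDK `stream=True` behaviour carries over through the gateway. The sketch below collects streamed content deltas into the full reply; `collect_stream` is illustrative, not part of either SDK:

```python
def collect_stream(chunks) -> str:
    """Concatenate the content deltas from a chat-completion stream."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta content is typically None
            parts.append(delta)
    return "".join(parts)


# With the wrapped client above, usage would look like:
# stream = client.chat.completions.create(model="openai/gpt-4o", messages=[...], stream=True)
# print(collect_stream(stream))
```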
## Using it with the `@track` decorator

If you have multiple steps in your LLM pipeline, you can use the `@track` decorator to log a trace for each step. If Anannas AI is called within one of these steps, the LLM call will be associated with that corresponding step:
```python
import os

from openai import OpenAI
from opik import track
from opik.integrations.openai import track_openai

# Create and wrap the OpenAI client with Anannas AI's base URL
client = OpenAI(
    api_key=os.environ["ANANNAS_API_KEY"],
    base_url="https://api.anannas.ai/v1",
)
client = track_openai(client)


@track
def summarize_text(text: str):
    response = client.chat.completions.create(
        model="openai/gpt-4o",
        messages=[
            {"role": "system", "content": "You create concise summaries of text content."},
            {"role": "user", "content": f"Please summarize this text:\n{text}"},
        ],
    )
    return response.choices[0].message.content


@track
def analyze_sentiment(summary: str):
    response = client.chat.completions.create(
        model="openai/gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You perform sentiment analysis on text."},
            {"role": "user", "content": f"What is the sentiment of this summary:\n{summary}"},
        ],
    )
    return response.choices[0].message.content


@track(project_name="anannas-integration-demo")
def analyze_text(text: str):
    # First LLM call: summarize the text
    summary = summarize_text(text)
    # Second LLM call: analyze the sentiment of the summary
    sentiment = analyze_sentiment(summary)
    return {"summary": summary, "sentiment": sentiment}


# Example usage
text_to_analyze = (
    "Anannas AI provides a unified gateway to access hundreds of LLM models "
    "with built-in observability and automatic fallback routing."
)
result = analyze_text(text_to_analyze)
```
The trace will show nested LLM calls with hierarchical spans.
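Because every model is addressed through the same endpoint, switching providers is just a change to the `model` string, which also makes client-side fallback straightforward. A hypothetical sketch (`call_with_fallback` is not part of either SDK):

```python
def call_with_fallback(create_fn, models, **kwargs):
    """Try each model in order, returning the first successful response.

    `create_fn` stands in for client.chat.completions.create; any exception
    from one model triggers a retry with the next model in the list.
    """
    last_err = None
    for model in models:
        try:
            return create_fn(model=model, **kwargs)
        except Exception as err:
            last_err = err
    raise last_err


# Usage with the wrapped client would look like:
# response = call_with_fallback(
#     client.chat.completions.create,
#     ["anthropic/claude-3-5-sonnet", "openai/gpt-4o"],
#     messages=[{"role": "user", "content": "Hello!"}],
# )
```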
If you have suggestions for improving the Anannas AI integration, please let us know by opening an issue on GitHub.