Together AI provides fast inference for leading open-source models including Llama, Mistral, Qwen, and many others.
This guide explains how to integrate Opik with Together AI via LiteLLM. With Opik's LiteLLM integration, you can track and evaluate your Together AI calls within your Opik projects: Opik automatically logs the input prompt, the model used, token usage, and the generated response.
To start tracking your Together AI calls, you'll need to have both `opik` and `litellm` installed. You can install them using pip:

```bash
pip install opik litellm
```
In addition, configure Opik by running the `opik configure` command, which will prompt you for your API key if you are using the Cloud platform, or for the address of your local server:

```bash
opik configure
```
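If you prefer to configure the SDK from code rather than the interactive prompt, `opik.configure()` offers an equivalent entry point. This is a sketch; the exact parameters may vary by SDK version:

```python
import opik

# Programmatic alternative to the interactive `opik configure` command.
# `use_local=True` points the SDK at a self-hosted Opik server; Cloud users
# would pass their API key instead (check your SDK version for parameter names).
opik.configure(use_local=True)
```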
You'll need to set your Together AI API key as an environment variable:

```bash
export TOGETHER_API_KEY="YOUR_API_KEY"
```
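If you are working in a notebook or would rather set the key from Python, a minimal equivalent (affecting only the current process) is:

```python
import os

# Equivalent to `export TOGETHER_API_KEY=...`, but scoped to this
# Python process rather than the shell session.
os.environ["TOGETHER_API_KEY"] = "YOUR_API_KEY"
```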
To log your LLM calls to Opik, create the `OpikLogger` callback and register it with LiteLLM. Once that's done, you can call LiteLLM as you normally would:
```python
from litellm.integrations.opik.opik import OpikLogger
import litellm

opik_logger = OpikLogger()
litellm.callbacks = [opik_logger]

response = litellm.completion(
    model="together_ai/meta-llama/Llama-3.2-3B-Instruct-Turbo",
    messages=[
        {"role": "user", "content": "Why is tracking and evaluation of LLMs important?"}
    ],
)
```
If you are using LiteLLM within a function tracked with the `@track` decorator, you will need to pass `current_span_data` as metadata to the `litellm.completion` call:
```python
from opik import track, opik_context
import litellm


@track
def generate_story(prompt):
    response = litellm.completion(
        model="together_ai/meta-llama/Llama-3.2-3B-Instruct-Turbo",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": opik_context.get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content


@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    response = litellm.completion(
        model="together_ai/meta-llama/Llama-3.2-90B-Vision-Instruct-Turbo",
        messages=[{"role": "user", "content": prompt}],
        metadata={
            "opik": {
                "current_span_data": opik_context.get_current_span_data(),
            },
        },
    )
    return response.choices[0].message.content


@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story


generate_opik_story()
```