Gemini is a family of multimodal large language models developed by Google DeepMind.
Opik also supports Google VertexAI, Google's fully managed AI development platform that provides access to Gemini models through the `google-genai` package. When using VertexAI, you can use the same `track_genai` wrapper with a `google-genai` client configured for VertexAI, so you can trace and monitor your Gemini calls whether you use the direct Google AI API or VertexAI's enterprise platform.
Comet provides a hosted version of the Opik platform: simply create an account and grab your API key. You can also run the Opik platform locally; see the installation guide for more information.
First, ensure you have both the `opik` and `google-genai` packages installed:

```bash
pip install opik google-genai
```
Configure the Opik Python SDK for your deployment type; see the Python SDK Configuration guide for detailed instructions. You can configure the SDK from the command line with `opik configure`, or from Python with `opik.configure()`.
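For example, a minimal sketch of the programmatic route (the API key and workspace values are placeholders for your own credentials):

```python
import opik

# Configure the SDK for the hosted Comet platform; the values below are
# placeholders for your own credentials
opik.configure(api_key="YOUR_OPIK_API_KEY", workspace="YOUR_WORKSPACE")

# Or, if you are running Opik locally:
# opik.configure(use_local=True)
```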
To use Gemini, you will also need a Gemini API key; see Google's documentation for how to retrieve it.
You can set it as an environment variable:
```bash
export GOOGLE_API_KEY="YOUR_API_KEY"
```
Or set it programmatically:
```python
import os
import getpass

if "GOOGLE_API_KEY" not in os.environ:
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Gemini API key: ")
```
To log LLM calls to Opik, wrap the Gemini client with `track_genai`. Every call made with the wrapped client is then logged to Opik:
```python
import os

from google import genai
from opik.integrations.genai import track_genai

os.environ["OPIK_PROJECT_NAME"] = "gemini-integration-demo"

client = genai.Client()
gemini_client = track_genai(client)

prompt = """
Write a short two sentence story about Opik.
"""

response = gemini_client.models.generate_content(
    model="gemini-2.0-flash-001", contents=prompt
)
print(response.text)
```
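Streaming works the same way; a minimal sketch, assuming the wrapped client proxies `google-genai`'s `generate_content_stream` method:

```python
# Stream the response chunk by chunk through the wrapped client
for chunk in gemini_client.models.generate_content_stream(
    model="gemini-2.0-flash-001",
    contents="Write a one-line tagline for an LLM observability tool.",
):
    print(chunk.text or "", end="")
```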
To use Opik with VertexAI, configure the `google-genai` client for VertexAI and wrap it with `track_genai`:
```python
import os

from google import genai
from opik.integrations.genai import track_genai

# Set the project name for organization
os.environ["OPIK_PROJECT_NAME"] = "vertexai-integration-demo"

# Configure the client for VertexAI
PROJECT_ID = "your-project-id"
LOCATION = "us-central1"
client = genai.Client(vertexai=True, project=PROJECT_ID, location=LOCATION)
vertexai_client = track_genai(client)

# Use the wrapped client
response = vertexai_client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="Write a short story about AI observability.",
)
print(response.text)
```
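Note that VertexAI authenticates through Google Cloud credentials rather than an API key. A minimal setup sketch, assuming you use Application Default Credentials and the standard `google-genai` environment variables:

```bash
# Authenticate with Application Default Credentials
gcloud auth application-default login

# Alternatively, let google-genai pick up the VertexAI settings from the environment
export GOOGLE_GENAI_USE_VERTEXAI=true
export GOOGLE_CLOUD_PROJECT="your-project-id"
export GOOGLE_CLOUD_LOCATION="us-central1"
```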
If you have multiple steps in your LLM pipeline, you can use the `@track` decorator to log the traces for each step. If Gemini is called within one of these steps, the LLM call will be associated with that corresponding step:
```python
from opik import track

@track
def generate_story(prompt):
    response = gemini_client.models.generate_content(
        model="gemini-2.0-flash-001", contents=prompt
    )
    return response.text

@track
def generate_topic():
    prompt = "Generate a topic for a story about Opik."
    response = gemini_client.models.generate_content(
        model="gemini-2.0-flash-001", contents=prompt
    )
    return response.text

@track
def generate_opik_story():
    topic = generate_topic()
    story = generate_story(topic)
    return story

# Execute the multi-step pipeline
generate_opik_story()
```
The trace can now be viewed in the UI, with hierarchical spans showing the relationship between the different steps.
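You can also enrich the current trace from inside a tracked function, for example to make pipelines easier to filter in the UI; a minimal sketch using `opik_context.update_current_trace`:

```python
from opik import opik_context, track

@track
def generate_opik_story():
    # Tag the current trace so it is easy to find in the Opik UI
    opik_context.update_current_trace(
        tags=["gemini", "story-pipeline"],
        metadata={"model": "gemini-2.0-flash-001"},
    )
    topic = generate_topic()
    return generate_story(topic)
```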
The `track_genai` wrapper automatically logs multimodal content parts (images, audio, video) as attachments in your traces. When you send images or other media to Gemini models, they are captured and viewable directly in the Opik UI alongside your trace data.
This makes it easy to inspect exactly what media was sent to the model and to debug multimodal prompts without leaving the trace view.
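For example, a minimal sketch of an image prompt, where `cat.jpg` is a placeholder for a local image file:

```python
from google.genai import types

# Read a local image; "cat.jpg" is a placeholder path
with open("cat.jpg", "rb") as f:
    image_bytes = f.read()

response = gemini_client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Describe this image in one sentence.",
    ],
)
print(response.text)
```

The image part is logged as an attachment on the resulting trace.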
The `track_genai` wrapper also supports Google's Veo video generation API. When you generate videos, Opik automatically tracks the video creation process and logs the generated video as an attachment when you save it.
```python
import os
import time

import google.genai as genai
from google.genai.types import GenerateVideosConfig, HttpOptions

import opik
from opik import opik_context, track
from opik.integrations.genai import track_genai

os.environ["OPIK_PROJECT_NAME"] = "genai-video-demo"

# Configure for VertexAI (required for Veo)
client = genai.Client(
    vertexai=True,
    http_options=HttpOptions(api_version="v1"),
)
genai_client = track_genai(client)

@track
def generate_video(
    prompt: str,
    number_of_videos: int = 1,
    duration_seconds: int = 4,
    resolution: str = "720p",
    generate_audio: bool = False,
) -> dict:
    """Generate a video using Google's Veo model."""
    # Create the video generation operation
    operation = genai_client.models.generate_videos(
        model="veo-3.1-fast-generate-preview",
        prompt=prompt,
        config=GenerateVideosConfig(
            duration_seconds=duration_seconds,
            resolution=resolution,
            generate_audio=generate_audio,
            number_of_videos=number_of_videos,
        ),
    )

    # Wait for completion, polling the long-running operation
    with opik.start_as_current_span(name="wait_for_completion"):
        while not operation.done:
            time.sleep(10)
            operation = genai_client.operations.get(operation)
        result = {"name": operation.name, "done": operation.done}
        opik_context.update_current_span(output=result)

    # Download all videos if generation succeeded
    if operation.response and operation.response.generated_videos:
        output_paths = []
        for i, generated_video in enumerate(operation.response.generated_videos):
            output_path = f"output_video_{i}.mp4"
            generated_video.video.save(output_path)
            output_paths.append(output_path)
        result["output_paths"] = output_paths

    return result

# Generate videos
generate_video("A golden retriever playing in the snow", number_of_videos=2)
```
The trace will show the full video generation workflow, including the video creation, the polling, and the generated video as an attachment.
The `track_genai` wrapper automatically tracks token usage and cost for all supported Google AI models.
Cost information is automatically captured and displayed in the Opik UI, including per-call token counts and the estimated cost of each LLM call.
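If you also want to inspect token usage programmatically, the `google-genai` response exposes it directly; a minimal sketch:

```python
response = gemini_client.models.generate_content(
    model="gemini-2.0-flash-001", contents="Hello, Opik!"
)

# usage_metadata is populated by google-genai on each response
usage = response.usage_metadata
print(usage.prompt_token_count, usage.candidates_token_count, usage.total_token_count)
```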