apps/opik-documentation/documentation/fern/docs/tracing/integrations/xai-grok.mdx
xAI is an AI company founded by Elon Musk that develops the Grok series of large language models. Grok models are designed to have access to real-time information and are built with a focus on truthfulness, competence, and maximum benefit to humanity.
This guide explains how to integrate Opik with xAI Grok via LiteLLM. With Opik's LiteLLM integration, you can easily track and evaluate your xAI API calls within your Opik projects: Opik automatically logs the input prompt, the model used, token usage, and the generated response.
To get started, you need to configure Opik to send traces to your Comet project. You can do this by setting the `OPIK_PROJECT_NAME` environment variable:

```bash
export OPIK_PROJECT_NAME="your-project-name"
export OPIK_WORKSPACE="your-workspace-name"
```
You can also call the `opik.configure` method:

```python
import opik

opik.configure(
    project_name="your-project-name",
    workspace="your-workspace-name",
)
```
Install the required packages:
```bash
pip install opik litellm
```
Create a LiteLLM configuration file (e.g., `litellm_config.yaml`):

```yaml
model_list:
  - model_name: grok-beta
    litellm_params:
      model: xai/grok-beta
      api_key: os.environ/XAI_API_KEY
  - model_name: grok-vision-beta
    litellm_params:
      model: xai/grok-vision-beta
      api_key: os.environ/XAI_API_KEY

litellm_settings:
  callbacks: ["opik"]
```
Set your xAI API key as an environment variable:
```bash
export XAI_API_KEY="your-xai-api-key"
```
You can obtain an xAI API key from the xAI Console.
Start the LiteLLM proxy server:
```bash
litellm --config litellm_config.yaml
```
Use the proxy server to make requests:
```python
import openai

client = openai.OpenAI(
    api_key="anything",  # can be anything, the proxy handles authentication
    base_url="http://0.0.0.0:4000"
)

response = client.chat.completions.create(
    model="grok-beta",
    messages=[
        {"role": "user", "content": "What are the latest developments in AI technology?"}
    ]
)

print(response.choices[0].message.content)
```
You can also use LiteLLM directly in your Python code:
```python
import os

import litellm
import opik
from litellm import completion
from litellm.integrations.opik.opik import OpikLogger

# Configure Opik
opik.configure()

# Configure LiteLLM to log traces to Opik
opik_logger = OpikLogger()
litellm.callbacks = [opik_logger]

os.environ["XAI_API_KEY"] = "your-xai-api-key"

response = completion(
    model="xai/grok-beta",
    messages=[
        {"role": "user", "content": "What is the current state of renewable energy adoption worldwide?"}
    ]
)

print(response.choices[0].message.content)
```
xAI provides access to several Grok model variants:
- `grok-beta` - The main conversational AI model with real-time information access
- `grok-vision-beta` - Multimodal model capable of processing text and images
- `grok-mini` - A smaller, faster variant optimized for simpler tasks

For the most up-to-date list of available models, visit the xAI API documentation.
One of Grok's key features is its ability to access real-time information. This makes it particularly useful for questions about current events:
```python
response = completion(
    model="xai/grok-beta",
    messages=[
        {"role": "user", "content": "What are the latest news headlines today?"}
    ]
)

print(response.choices[0].message.content)
```
Grok Vision Beta can process both text and images:
```python
from litellm import completion

response = completion(
    model="xai/grok-vision-beta",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What do you see in this image?"},
                {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,..."}}
            ]
        }
    ]
)

print(response.choices[0].message.content)
```
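The `data:image/jpeg;base64,...` URL above stands for a base64-encoded image. As a rough sketch, a small helper like the following can build such a data URL from a local file (the `image_to_data_url` name is our own illustration, not part of LiteLLM or the xAI API):

```python
import base64
import mimetypes

def image_to_data_url(path: str) -> str:
    """Read a local image file and return it as a base64 data URL."""
    # Guess the MIME type from the file extension; fall back to a generic type
    mime_type, _ = mimetypes.guess_type(path)
    if mime_type is None:
        mime_type = "application/octet-stream"
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"
```

You can then pass `image_to_data_url("photo.jpg")` as the `url` value in the `image_url` content block.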
Grok models support function calling for enhanced capabilities:
```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current time in a specific timezone",
            "parameters": {
                "type": "object",
                "properties": {
                    "timezone": {
                        "type": "string",
                        "description": "The timezone to get the time for",
                    }
                },
                "required": ["timezone"],
            },
        },
    }
]

response = completion(
    model="xai/grok-beta",
    messages=[{"role": "user", "content": "What time is it in Tokyo right now?"}],
    tools=tools,
)
```
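If Grok decides to use the tool, `response.choices[0].message.tool_calls` will contain the requested call instead of a text answer. A minimal sketch of executing that call locally and building the follow-up message might look like this (the `get_current_time` and `run_tool_call` implementations are our own illustration, not part of LiteLLM):

```python
import json
from datetime import datetime
from zoneinfo import ZoneInfo

def get_current_time(timezone: str) -> str:
    """Local implementation of the tool declared in `tools`."""
    return datetime.now(ZoneInfo(timezone)).strftime("%H:%M")

def run_tool_call(tool_call) -> dict:
    """Execute one tool call and return the follow-up `tool` message."""
    # The model returns the arguments as a JSON string
    arguments = json.loads(tool_call.function.arguments)
    result = get_current_time(**arguments)
    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": result,
    }
```

Append the assistant message containing the tool call plus this `tool` message to `messages`, then call `completion` again so Grok can phrase the final answer.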
Use the `temperature` and `max_tokens` parameters to control the creativity and length of Grok's responses:

```python
# More creative responses
response = completion(
    model="xai/grok-beta",
    messages=[{"role": "user", "content": "Write a creative story about space exploration"}],
    temperature=0.9,
    max_tokens=1000
)

# More factual responses
response = completion(
    model="xai/grok-beta",
    messages=[{"role": "user", "content": "Explain quantum computing"}],
    temperature=0.1,
    max_tokens=500
)
```
Use system messages to guide Grok's behavior:
```python
response = completion(
    model="xai/grok-beta",
    messages=[
        {"role": "system", "content": "You are a helpful scientific advisor. Provide accurate, evidence-based information."},
        {"role": "user", "content": "What are the current challenges in fusion energy research?"}
    ]
)
```
Once your xAI calls are logged with Opik, you can evaluate your LLM application using Opik's evaluation framework:
```python
from opik.evaluation import evaluate
from opik.evaluation.metrics import Hallucination

# Define your evaluation task
def evaluation_task(x):
    return {
        "message": x["message"],
        "output": x["output"],
        "reference": x["reference"]
    }

# Create the Hallucination metric
hallucination_metric = Hallucination()

# Run the evaluation
evaluation_results = evaluate(
    experiment_name="xai-grok-evaluation",
    dataset=your_dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    project_name="my-project",
)
```
Make sure to set the following environment variables:
```bash
# xAI Configuration
export XAI_API_KEY="your-xai-api-key"

# Opik Configuration
export OPIK_PROJECT_NAME="your-project-name"
export OPIK_WORKSPACE="your-workspace-name"
```