apps/opik-documentation/documentation/fern/docs/tracing/integrations/novita-ai.mdx
Novita AI is an AI cloud platform that helps developers easily deploy AI models through a simple API, backed by affordable and reliable GPU cloud infrastructure. It provides access to a wide range of models including DeepSeek, Qwen, Llama, and other popular LLMs.
This guide explains how to integrate Opik with Novita AI via LiteLLM. Using Opik's LiteLLM integration, you can easily track and evaluate your Novita AI API calls within your Opik projects: Opik automatically logs the input prompt, the model used, token usage, and the generated response.
To get started, you need to configure Opik to send traces to your Comet project. You can do this by setting the `OPIK_PROJECT_NAME` environment variable:

```bash
export OPIK_PROJECT_NAME="your-project-name"
export OPIK_WORKSPACE="your-workspace-name"
```
You can also call the `opik.configure` method:

```python
import opik

opik.configure(
    project_name="your-project-name",
    workspace="your-workspace-name",
)
```
Install the required packages:

```bash
pip install opik litellm
```
Create a LiteLLM configuration file (e.g., `litellm_config.yaml`):

```yaml
model_list:
  - model_name: deepseek-r1-turbo
    litellm_params:
      model: novita/deepseek/deepseek-r1-turbo
      api_key: os.environ/NOVITA_API_KEY
  - model_name: qwen-32b-fp8
    litellm_params:
      model: novita/qwen/qwen3-32b-fp8
      api_key: os.environ/NOVITA_API_KEY
  - model_name: llama-70b-instruct
    litellm_params:
      model: novita/meta-llama/llama-3.1-70b-instruct
      api_key: os.environ/NOVITA_API_KEY

litellm_settings:
  callbacks: ["opik"]
```
Set your Novita AI API key as an environment variable:

```bash
export NOVITA_API_KEY="your-novita-api-key"
```
You can obtain a Novita AI API key from the Novita AI dashboard.
Start the LiteLLM proxy server:

```bash
litellm --config litellm_config.yaml
```
Use the proxy server to make requests:

```python
import openai

client = openai.OpenAI(
    api_key="anything",  # the proxy handles authentication, so this can be anything
    base_url="http://0.0.0.0:4000",
)

response = client.chat.completions.create(
    model="deepseek-r1-turbo",
    messages=[
        {"role": "user", "content": "What are the advantages of using cloud-based AI platforms?"}
    ],
)

print(response.choices[0].message.content)
```
You can also use LiteLLM directly in your Python code:

```python
import os

import litellm
from litellm import completion

import opik

# Configure Opik
opik.configure()

# Register the Opik callback so LiteLLM logs every call to Opik
litellm.callbacks = ["opik"]

os.environ["NOVITA_API_KEY"] = "your-novita-api-key"

response = completion(
    model="novita/deepseek/deepseek-r1-turbo",
    messages=[
        {"role": "user", "content": "How can cloud AI platforms improve development efficiency?"}
    ],
)

print(response.choices[0].message.content)
```
Novita AI provides access to a comprehensive catalog of models from leading providers. Some of the popular models available include:

- **DeepSeek**: `deepseek-r1-turbo`, `deepseek-v3-turbo`, `deepseek-v3-0324`
- **Qwen**: `qwen3-235b-a22b-fp8`, `qwen3-30b-a3b-fp8`, `qwen3-32b-fp8`
- **Llama**: `llama-4-maverick-17b-128e-instruct-fp8`, `llama-3.3-70b-instruct`, `llama-3.1-70b-instruct`
- **Mistral**: `mistral-nemo`
- **Gemma**: `gemma-3-27b-it`

For the complete list of available models, visit the Novita AI model catalog.
Novita AI supports function calling with compatible models:

```python
from litellm import completion

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

response = completion(
    model="novita/deepseek/deepseek-r1-turbo",
    messages=[{"role": "user", "content": "What's the weather like in Boston today?"}],
    tools=tools,
)
```
For structured outputs, you can enable JSON mode:

```python
response = completion(
    model="novita/deepseek/deepseek-r1-turbo",
    messages=[
        {"role": "user", "content": "List 5 popular cookie recipes."}
    ],
    response_format={"type": "json_object"},
)
```
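With JSON mode enabled, `response.choices[0].message.content` is a JSON-encoded string, so parse it before use. A small sketch (the `content` string below is illustrative sample data, not an actual model reply):

```python
import json

# Illustrative stand-in for response.choices[0].message.content under JSON mode.
content = '{"recipes": ["chocolate chip", "oatmeal raisin", "snickerdoodle", "peanut butter", "sugar"]}'

data = json.loads(content)
print(len(data["recipes"]))  # → 5
```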
Once your Novita AI calls are logged with Opik, you can evaluate your LLM application using Opik's evaluation framework:

```python
from opik.evaluation import evaluate
from opik.evaluation.metrics import Hallucination

# Define your evaluation task
def evaluation_task(x):
    return {
        "message": x["message"],
        "output": x["output"],
        "reference": x["reference"],
    }

# Create the Hallucination metric
hallucination_metric = Hallucination()

# Run the evaluation
evaluation_results = evaluate(
    experiment_name="novita-ai-evaluation",
    dataset=your_dataset,
    task=evaluation_task,
    scoring_metrics=[hallucination_metric],
    project_name="my-project",
)
```
Make sure to set the following environment variables:

```bash
# Novita AI Configuration
export NOVITA_API_KEY="your-novita-api-key"

# Opik Configuration
export OPIK_PROJECT_NAME="your-project-name"
export OPIK_WORKSPACE="your-workspace-name"
```