apps/opik-documentation/documentation/fern/docs/tracing/integrations/llama_index.mdx
LlamaIndex is a flexible data framework for building LLM applications:

> LlamaIndex is a "data framework" to help you build LLM apps. It provides the following tools:
>
> - Offers data connectors to ingest your existing data sources and data formats (APIs, PDFs, docs, SQL, etc.).
> - Provides ways to structure your data (indices, graphs) so that this data can be easily used with LLMs.
> - Provides an advanced retrieval/query interface over your data: feed in any LLM input prompt, get back retrieved context and knowledge-augmented output.
> - Allows easy integrations with your outer application framework (e.g. with LangChain, Flask, Docker, ChatGPT, anything else).
Comet provides a hosted version of the Opik platform: simply create an account and grab your API key.

You can also run the Opik platform locally; see the installation guide for more information.
To use the Opik integration with LlamaIndex, you'll need to have both the `opik` and `llama_index` packages installed. You can install them using pip:

```bash
pip install opik llama-index llama-index-agent-openai llama-index-llms-openai llama-index-callbacks-opik
```
Configure the Opik Python SDK for your deployment type: run `opik configure` from the command line, or call `opik.configure()` in Python. See the Python SDK Configuration guide for detailed instructions.
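For example, a minimal configuration in Python might look like the following sketch (the `use_local` flag applies only to self-hosted deployments):

```python
import opik

# Hosted (Comet) deployment: you will be prompted for your API key on first run
opik.configure()

# Self-hosted deployment (assumes a default local install)
# opik.configure(use_local=True)
```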
In order to use LlamaIndex, you will need to configure your LLM provider API keys. For this example, we'll use OpenAI. You can find or create your API key in your OpenAI account settings.
You can set them as environment variables:

```bash
export OPENAI_API_KEY="YOUR_API_KEY"
```

Or set them programmatically:

```python
import os
import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
```
To use the Opik integration with LlamaIndex, you can use the `set_global_handler` function from the LlamaIndex package to set the global tracer:

```python
from llama_index.core import global_handler, set_global_handler

set_global_handler("opik")
opik_callback_handler = global_handler
```
Now that the integration is set up, all LlamaIndex runs will be traced and logged to Opik.
Alternatively, you can configure the callback handler directly for more control:
```python
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager
from opik.integrations.llama_index import LlamaIndexCallbackHandler

# Basic setup
opik_callback = LlamaIndexCallbackHandler()

# Or with optional parameters
opik_callback = LlamaIndexCallbackHandler(
    project_name="my-llamaindex-project",  # Set a custom project name
    skip_index_construction_trace=True,  # Skip tracing index construction
)

Settings.callback_manager = CallbackManager([opik_callback])
```
The `skip_index_construction_trace` parameter is useful when you want to track only query operations and not the index construction phase (particularly for large document sets or pre-built indexes).
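For instance, when querying a pre-built index that was previously persisted to disk, you might combine this option with LlamaIndex's storage utilities. The sketch below assumes the index was saved earlier with `index.storage_context.persist("./storage")`; the `./storage` path is hypothetical:

```python
from llama_index.core import StorageContext, Settings, load_index_from_storage
from llama_index.core.callbacks import CallbackManager
from opik.integrations.llama_index import LlamaIndexCallbackHandler

# Trace only queries, not index (re)construction
opik_callback = LlamaIndexCallbackHandler(skip_index_construction_trace=True)
Settings.callback_manager = CallbackManager([opik_callback])

# Load a previously persisted index (hypothetical "./storage" directory)
storage_context = StorageContext.from_defaults(persist_dir="./storage")
index = load_index_from_storage(storage_context)

response = index.as_query_engine().query("What did the author do growing up?")
print(response)
```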
To showcase the integration, we will create a query engine that uses Paul Graham's essays as the data source.
First step: Configure the Opik integration:
```python
import os
from llama_index.core import global_handler, set_global_handler

# Set the project name for better organization
os.environ["OPIK_PROJECT_NAME"] = "llamaindex-integration-demo"

set_global_handler("opik")
opik_callback_handler = global_handler
```
Second step: Download the example data:
```python
import os
import requests

# Create the data directory if it doesn't exist
os.makedirs('./data/paul_graham/', exist_ok=True)

# Download the example essay
url = 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt'
response = requests.get(url)

with open('./data/paul_graham/paul_graham_essay.txt', 'wb') as f:
    f.write(response.content)
```
Third step: Configure the OpenAI API key:

```python
import os
import getpass

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")
```
Fourth step: We can now load the data and create an index and query engine:

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("./data/paul_graham").load_data()
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
print(response)
```
Since the Opik integration has been set up, all traces are logged to the Opik platform.
The LlamaIndex integration works seamlessly with Opik's `@track` decorator. When you call LlamaIndex operations inside a tracked function, the LlamaIndex traces will automatically be attached as child spans to your existing trace.
```python
import opik
from llama_index.core import global_handler, set_global_handler
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage

# Configure Opik integration
set_global_handler("opik")
opik_callback_handler = global_handler

@opik.track()
def my_llm_application(user_query: str):
    """Process a user query with LlamaIndex."""
    llm = OpenAI(model="gpt-3.5-turbo")
    messages = [
        ChatMessage(role="system", content="You are a helpful assistant."),
        ChatMessage(role="user", content=user_query),
    ]
    response = llm.chat(messages)
    return response.message.content

# Call the tracked function
result = my_llm_application("What is the capital of France?")
print(result)
```
In this example, Opik will create a trace for the `my_llm_application` function, and all LlamaIndex operations (like the LLM chat call) will appear as nested spans within this trace, giving you a complete view of your application's execution.

You can also manually create traces using `opik.start_as_current_trace()` and have LlamaIndex operations nested within:
```python
import opik
from llama_index.core import global_handler, set_global_handler
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage

# Configure Opik integration
set_global_handler("opik")
opik_callback_handler = global_handler

# Create a manual trace
with opik.start_as_current_trace(name="user_query_processing"):
    llm = OpenAI(model="gpt-3.5-turbo")
    messages = [
        ChatMessage(role="user", content="Explain quantum computing in simple terms"),
    ]
    response = llm.chat(messages)
    print(response.message.content)
```
This approach is useful when you want more control over trace naming and want to group multiple LlamaIndex operations under a single trace.
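For example, you could group an index query and a follow-up LLM call under a single trace. This sketch reuses the `query_engine` built in the example above:

```python
import opik
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage

# Both operations below appear as nested spans of one trace
with opik.start_as_current_trace(name="query_and_refine"):
    # First operation: query the index built earlier
    answer = query_engine.query("What did the author do growing up?")

    # Second operation: refine the answer with a direct LLM call
    llm = OpenAI(model="gpt-3.5-turbo")
    refined = llm.chat([
        ChatMessage(role="user", content=f"Summarize in one sentence: {answer}"),
    ])
    print(refined.message.content)
```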
LlamaIndex workflows are multi-step processing pipelines for LLM applications. To track workflow executions in Opik, decorate your workflow steps with `@opik.track()` and wrap the workflow execution with `opik.start_as_current_span()`:
```python
import opik
from llama_index.core.workflow import Workflow, StartEvent, StopEvent, step, Event
from llama_index.core import set_global_handler

# Configure Opik integration for LLM calls within steps
set_global_handler("opik")

class QueryEvent(Event):
    """Event for passing the query through the workflow."""
    query: str

class MyRAGWorkflow(Workflow):
    """Simple RAG workflow with tracked steps."""

    @step
    @opik.track()
    async def retrieve_context(self, ev: StartEvent) -> QueryEvent:
        """Retrieve relevant context for the query."""
        query = ev.get("query", "")
        # Your retrieval logic here
        context = f"Context for: {query}"
        return QueryEvent(query=f"{context} | {query}")

    @step
    @opik.track()
    async def generate_response(self, ev: QueryEvent) -> StopEvent:
        """Generate the final response using the context."""
        # Your generation logic here
        result = f"Response based on: {ev.query}"
        return StopEvent(result=result)

# Create a workflow instance
workflow = MyRAGWorkflow()

# Use start_as_current_span to track the workflow execution
# (run this inside an async context, e.g. a notebook or an async main function)
with opik.start_as_current_span(
    name="rag_workflow_execution",
    input={"query": "What are the key features?"},
    project_name="llama-index-workflows",
) as span:
    result = await workflow.run(query="What are the key features?")
    span.update(output={"result": result})
    print(result)

opik.flush_tracker()  # Ensure all traces are sent
```
In this example:

- `@opik.track()` captures each workflow step as a span.
- The `@step` decorator is placed before `@opik.track()` so that LlamaIndex's workflow engine can properly discover and execute the steps.
- `opik.start_as_current_span()` tracks the overall workflow execution; it works in both standalone and nested contexts.
- `opik.flush_tracker()` is called at the end to ensure all traces are sent.

When using streaming chat responses with OpenAI models (e.g., `llm.stream_chat()`), you need to explicitly enable token usage tracking by configuring the `stream_options` parameter:
```python
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage
from llama_index.core import set_global_handler

# Configure Opik integration
set_global_handler("opik")

# Configure the OpenAI LLM with stream_options to include usage information
llm = OpenAI(
    model="gpt-3.5-turbo",
    additional_kwargs={
        "stream_options": {"include_usage": True},
    },
)

messages = [
    ChatMessage(role="user", content="Tell me a short joke"),
]

# Token usage will now be tracked in streaming responses
response = llm.stream_chat(messages)
for chunk in response:
    print(chunk.delta, end="", flush=True)
```
The Opik integration with LlamaIndex automatically tracks token usage and cost for all supported LLM models used within LlamaIndex applications.
Cost information is automatically captured and displayed in the Opik UI alongside each trace and span.