docs/examples/observability/LlamaDebugHandler.ipynb
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/observability/LlamaDebugHandler.ipynb" target="_parent"></a>
Here we showcase the capabilities of our LlamaDebugHandler in logging events as we run queries within LlamaIndex.
NOTE: This is a beta feature. The usage within different classes and the API interface for the CallbackManager and LlamaDebugHandler may change!
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-agent-openai
%pip install llama-index-llms-openai
!pip install llama-index
from llama_index.core.callbacks import (
    CallbackManager,
    LlamaDebugHandler,
    CBEventType,
)
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
from llama_index.core import SimpleDirectoryReader
docs = SimpleDirectoryReader("./data/paul_graham/").load_data()
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-3.5-turbo", temperature=0)
Settings.llm = llm
llama_debug = LlamaDebugHandler(print_trace_on_end=True)
callback_manager = CallbackManager([llama_debug])
from llama_index.core import VectorStoreIndex
index = VectorStoreIndex.from_documents(
    docs, callback_manager=callback_manager
)
query_engine = index.as_query_engine()
response = query_engine.query("What did the author do growing up?")
The callback manager will log several start and end events for the following types:

- CBEventType.LLM
- CBEventType.EMBEDDING
- CBEventType.CHUNKING
- CBEventType.NODE_PARSING
- CBEventType.RETRIEVE
- CBEventType.SYNTHESIZE
- CBEventType.TREE
- CBEventType.QUERY
The LlamaDebugHandler provides a few basic methods for exploring information about these events.
# Print info on the LLM calls made during the vector index query
print(llama_debug.get_event_time_info(CBEventType.LLM))
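Conceptually, the timing info is an aggregation over matched start/end timestamps for a given event type. A minimal pure-Python sketch of that aggregation is below; the `EventStats` class and `summarize_durations` helper are illustrative stand-ins, not LlamaIndex's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class EventStats:
    """Illustrative timing summary (not LlamaIndex's actual class)."""

    total_count: int
    total_secs: float
    average_secs: float


def summarize_durations(pairs):
    """Aggregate (start_ts, end_ts) pairs into count/total/average stats."""
    durations = [end - start for start, end in pairs]
    total = sum(durations)
    return EventStats(
        total_count=len(durations),
        total_secs=total,
        average_secs=total / len(durations) if durations else 0.0,
    )


# Example: two LLM calls taking 1.5s and 2.5s
stats = summarize_durations([(0.0, 1.5), (10.0, 12.5)])
print(stats)  # EventStats(total_count=2, total_secs=4.0, average_secs=2.0)
```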
# Print info on llm inputs/outputs - returns start/end events for each LLM call
event_pairs = llama_debug.get_llm_inputs_outputs()
print(event_pairs[0][0])
print(event_pairs[0][1].payload.keys())
print(event_pairs[0][1].payload["response"])
# Get info on any event type
event_pairs = llama_debug.get_event_pairs(CBEventType.CHUNKING)
print(event_pairs[0][0].payload.keys()) # get first chunking start event
print(event_pairs[0][1].payload.keys()) # get first chunking end event
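Each pair above is a start event and an end event that share an event id. To make the pairing idea concrete, here is a hedged pure-Python sketch: the `(event_id, phase, payload)` tuple shape is invented for illustration and is not LlamaIndex's actual event schema.

```python
from collections import defaultdict


def pair_events(events):
    """Group a flat event stream into (start_payload, end_payload) pairs.

    Events sharing an event id are matched; unmatched starts (e.g. an event
    still in flight) are dropped, mirroring the idea behind get_event_pairs.
    """
    by_id = defaultdict(dict)
    for event_id, phase, payload in events:
        by_id[event_id][phase] = payload
    return [
        (phases["start"], phases["end"])
        for phases in by_id.values()
        if "start" in phases and "end" in phases
    ]


events = [
    ("chunk-1", "start", {"documents": 1}),
    ("chunk-1", "end", {"chunks": 4}),
    ("chunk-2", "start", {"documents": 1}),  # no matching end: dropped
]
pairs = pair_events(events)
print(pairs)  # [({'documents': 1}, {'chunks': 4})]
```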
# Clear the currently cached events
llama_debug.flush_event_logs()