BatchEvalRunner - Running Multiple Evaluations

docs/examples/evaluation/batch_eval.ipynb

The BatchEvalRunner class can be used to run a series of evaluations asynchronously. The number of async jobs running at once is capped by the runner's workers setting.
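As background, this kind of bounded async concurrency can be sketched with a plain asyncio.Semaphore. The snippet below is an illustrative stand-in, not BatchEvalRunner's actual implementation:

```python
import asyncio


async def bounded_gather(coro_factories, num_workers):
    # Cap the number of coroutines running at once, analogous to
    # how BatchEvalRunner limits in-flight evaluation jobs.
    semaphore = asyncio.Semaphore(num_workers)

    async def run(factory):
        async with semaphore:
            return await factory()

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(run(f) for f in coro_factories))


async def fake_eval(i):
    # Stand-in for a single evaluation call
    await asyncio.sleep(0)
    return i * 2


results = asyncio.run(
    bounded_gather([lambda i=i: fake_eval(i) for i in range(4)], num_workers=2)
)
print(results)  # [0, 2, 4, 6]
```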

Setup

python
%pip install llama-index-llms-openai llama-index-embeddings-openai
python
# attach to the same event-loop
import nest_asyncio

nest_asyncio.apply()
python
import os
import openai

os.environ["OPENAI_API_KEY"] = "sk-..."
# openai.api_key = os.environ["OPENAI_API_KEY"]
python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Response
from llama_index.llms.openai import OpenAI
from llama_index.core.evaluation import (
    FaithfulnessEvaluator,
    RelevancyEvaluator,
    CorrectnessEvaluator,
)
from llama_index.core.node_parser import SentenceSplitter
import pandas as pd

pd.set_option("display.max_colwidth", 0)

We use GPT-4 as the judge model for evaluation here.

python
# gpt-4
gpt4 = OpenAI(temperature=0, model="gpt-4")

faithfulness_gpt4 = FaithfulnessEvaluator(llm=gpt4)
relevancy_gpt4 = RelevancyEvaluator(llm=gpt4)
correctness_gpt4 = CorrectnessEvaluator(llm=gpt4)
python
documents = SimpleDirectoryReader("./test_wiki_data/").load_data()
python
# create vector index
llm = OpenAI(temperature=0.3, model="gpt-3.5-turbo")
splitter = SentenceSplitter(chunk_size=512)
vector_index = VectorStoreIndex.from_documents(
    documents, transformations=[splitter]
)

Question Generation

To run evaluations in batch, you can create the runner and then call the .aevaluate_queries() method on a list of queries.

First, we can generate some questions and then run evaluation on them.

python
%pip install spacy datasets span-marker scikit-learn
python
from llama_index.core.evaluation import DatasetGenerator

dataset_generator = DatasetGenerator.from_documents(documents, llm=llm)

qas = dataset_generator.generate_dataset_from_nodes(num=3)

Running Batch Evaluation

Now, we can run our batch evaluation!

python
from llama_index.core.evaluation import BatchEvalRunner

runner = BatchEvalRunner(
    {"faithfulness": faithfulness_gpt4, "relevancy": relevancy_gpt4},
    workers=8,
)

eval_results = await runner.aevaluate_queries(
    vector_index.as_query_engine(llm=llm), queries=qas.questions
)

# If we had ground-truth answers, we could also include the correctness
# evaluator like below. The correctness evaluator depends on additional
# kwargs, which are passed in as a dictionary; each question is mapped
# to a set of kwargs.

# runner = BatchEvalRunner(
#     {"correctness": correctness_gpt4},
#     workers=8,
# )

# eval_results = await runner.aevaluate_queries(
#     vector_index.as_query_engine(),
#     queries=qas.queries,
#     reference=[qr[1] for qr in qas.qr_pairs],
# )
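To make the commented correctness setup concrete: the queries and reference lists must stay index-aligned, since each question is matched to its ground-truth answer by position. A minimal illustration with made-up pairs (not the actual dataset):

```python
# Hypothetical (question, reference answer) pairs, standing in for qas.qr_pairs
qr_pairs = [
    ("What is the capital of France?", "Paris"),
    ("Who wrote Hamlet?", "William Shakespeare"),
]

queries = [question for question, _ in qr_pairs]
references = [answer for _, answer in qr_pairs]

# The i-th reference corresponds to the i-th query
assert len(queries) == len(references)
print(references)  # ['Paris', 'William Shakespeare']
```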
python
print(len(qas.qr_pairs))

Inspecting Outputs

python
print(eval_results.keys())

print(eval_results["faithfulness"][0].dict().keys())

print(eval_results["faithfulness"][0].passing)
print(eval_results["faithfulness"][0].response)
print(eval_results["faithfulness"][0].contexts)
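For a tabular view, the per-evaluator result lists can be flattened into a pandas DataFrame. The sketch below uses mock objects with a passing flag in place of real evaluation results:

```python
import pandas as pd
from types import SimpleNamespace

# Mock results standing in for eval_results from the runner
eval_results = {
    "faithfulness": [SimpleNamespace(passing=True), SimpleNamespace(passing=False)],
    "relevancy": [SimpleNamespace(passing=True), SimpleNamespace(passing=True)],
}

# One row per (evaluator, query) pair
rows = [
    {"metric": key, "query_idx": i, "passing": result.passing}
    for key, results in eval_results.items()
    for i, result in enumerate(results)
]
df = pd.DataFrame(rows)

# Mean of the boolean column gives the pass rate per evaluator
print(df.groupby("metric")["passing"].mean())
```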

Reporting Total Scores

python
def get_eval_results(key, eval_results):
    # Fraction of evaluation results that passed for the given metric
    results = eval_results[key]
    correct = 0
    for result in results:
        if result.passing:
            correct += 1
    score = correct / len(results)
    print(f"{key} Score: {score}")
    return score
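As a quick sanity check, the scoring logic can be exercised on mock results. The function is repeated here so the snippet runs standalone, with SimpleNamespace standing in for the result objects:

```python
from types import SimpleNamespace


def get_eval_results(key, eval_results):
    results = eval_results[key]
    correct = 0
    for result in results:
        if result.passing:
            correct += 1
    score = correct / len(results)
    print(f"{key} Score: {score}")
    return score


# Two passing and two failing mock results -> score of 0.5
mock_results = {
    "faithfulness": [
        SimpleNamespace(passing=True),
        SimpleNamespace(passing=True),
        SimpleNamespace(passing=False),
        SimpleNamespace(passing=False),
    ]
}

score = get_eval_results("faithfulness", mock_results)  # prints 0.5
```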
python
score = get_eval_results("faithfulness", eval_results)
python
score = get_eval_results("relevancy", eval_results)