docs/examples/cookbooks/cohere_retriever_eval.ipynb
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/cookbooks/cohere_retriever_eval.ipynb" target="_parent"></a>
Cohere Embed is the first embedding model that natively supports float, int8, binary and ubinary embeddings. Refer to their main blog post for more details on Cohere int8 & binary Embeddings.
This notebook helps you evaluate these different embedding types and pick one for your RAG pipeline. It uses our RetrieverEvaluator to evaluate the quality of the embeddings using the LlamaIndex Retriever module.
Observed metrics: Hit Rate and Mean Reciprocal Rank (MRR).
For any given question, these metrics compare the retrieved results against the ground-truth context. The eval dataset is created using our synthetic dataset generation module. We will use GPT-4 for dataset generation to avoid bias.
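For intuition, here is a small standalone sketch (not part of the notebook's code) of how these two metrics are typically computed for a single query; the node ids below are hypothetical.
# Toy illustration of hit rate and MRR for one query (hypothetical ids).
expected_ids = ["node_3"]  # ground-truth context node(s) for the question
retrieved_ids = ["node_7", "node_3"]  # top-k nodes returned by the retriever

# Hit rate: 1.0 if any expected id shows up in the retrieved list, else 0.0.
hit_rate = float(any(eid in retrieved_ids for eid in expected_ids))

# MRR: reciprocal of the rank of the first relevant result (0.0 if none found).
mrr = 0.0
for rank, node_id in enumerate(retrieved_ids, start=1):
    if node_id in expected_ids:
        mrr = 1.0 / rank
        break

print(hit_rate, mrr)  # 1.0 0.5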
%pip install llama-index-llms-openai
%pip install llama-index-embeddings-cohere
import os
os.environ["OPENAI_API_KEY"] = "YOUR OPENAI KEY"
os.environ["COHERE_API_KEY"] = "YOUR COHEREAI API KEY"
Here we load in the data (PG essay) and parse it into Nodes. We then index this data using our simple vector index and build a retriever for each of the following embedding types: float, int8, binary, and ubinary.
import nest_asyncio
nest_asyncio.apply()
from llama_index.core.evaluation import generate_question_context_pairs
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.core.node_parser import SentenceSplitter
from llama_index.llms.openai import OpenAI
from llama_index.embeddings.cohere import CohereEmbedding
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
node_parser = SentenceSplitter(chunk_size=512)
nodes = node_parser.get_nodes_from_documents(documents)
# By default, node ids are set to random UUIDs. To ensure the same ids per run, we set them manually.
for idx, node in enumerate(nodes):
node.id_ = f"node_{idx}"
# LLM for question generation.
# Use an LLM other than Cohere's to avoid bias.
llm = OpenAI(model="gpt-4")
# Function to return embedding model
def cohere_embedding(
model_name: str, input_type: str, embedding_type: str
) -> CohereEmbedding:
return CohereEmbedding(
api_key=os.environ["COHERE_API_KEY"],
model_name=model_name,
input_type=input_type,
embedding_type=embedding_type,
)
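As a quick sanity check (illustrative, not part of the original flow), you can embed a single string directly; get_text_embedding is the standard LlamaIndex embedding call, and the printed length shows the dimensionality of the returned vector.
# Illustrative sanity check: embed one string with the float embedding type.
embed_model = cohere_embedding("embed-english-v3.0", "search_document", "float")
vector = embed_model.get_text_embedding("Hello, world!")
print(len(vector), vector[:5])  # embedding dimension and first few values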
# Function to build a retriever for a given embedding type and embedding model
def retriever(nodes, embedding_type="float", model_name="embed-english-v3.0"):
vector_index = VectorStoreIndex(
nodes,
embed_model=cohere_embedding(
model_name, "search_document", embedding_type
),
)
retriever = vector_index.as_retriever(
similarity_top_k=2,
embed_model=cohere_embedding(
model_name, "search_query", embedding_type
),
)
return retriever
# Build retriever for float embedding type
retriever_float = retriever(nodes)
# Build retriever for int8 embedding type
retriever_int8 = retriever(nodes, "int8")
# Build retriever for binary embedding type
retriever_binary = retriever(nodes, "binary")
# Build retriever for ubinary embedding type
retriever_ubinary = retriever(nodes, "ubinary")
We'll try out retrieval over a sample query with the float retriever.
retrieved_nodes = retriever_float.retrieve("What did the author do growing up?")
from llama_index.core.response.notebook_utils import display_source_node
for node in retrieved_nodes:
display_source_node(node, source_length=1000)
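If you want the raw similarity scores rather than the rendered view, each retrieved item is a NodeWithScore; a minimal sketch:
# Inspect node ids and similarity scores of the retrieved results.
for node_with_score in retrieved_nodes:
    print(node_with_score.node.node_id, node_with_score.score)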
Here we build a simple evaluation dataset over the existing text corpus.
We use our generate_question_context_pairs to generate a set of (question, context) pairs over a given unstructured text corpus. This uses the LLM to auto-generate questions from each context chunk.
We get back an EmbeddingQAFinetuneDataset object. At a high level, this contains a set of ids mapping to queries and relevant doc chunks, as well as the corpus itself.
from llama_index.core.evaluation import (
generate_question_context_pairs,
EmbeddingQAFinetuneDataset,
)
qa_dataset = generate_question_context_pairs(
nodes, llm=llm, num_questions_per_chunk=2
)
queries = qa_dataset.queries.values()
print(list(queries)[0])
# [optional] save
qa_dataset.save_json("pg_eval_dataset.json")
# [optional] load
qa_dataset = EmbeddingQAFinetuneDataset.from_json("pg_eval_dataset.json")
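To see the structure described above, you can peek at the dataset's mappings (queries, corpus, and relevant_docs); a short illustrative sketch:
# Inspect the dataset: query ids -> questions, node ids -> text,
# and query ids -> relevant node ids.
print(len(qa_dataset.queries), "queries,", len(qa_dataset.corpus), "corpus chunks")
sample_query_id = list(qa_dataset.queries.keys())[0]
print("Query:", qa_dataset.queries[sample_query_id])
print("Relevant node ids:", qa_dataset.relevant_docs[sample_query_id])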
RetrieverEvaluator for Retrieval Evaluation
We're now ready to run our retrieval evals. We'll run our RetrieverEvaluator over the eval dataset that we generated.
RetrieverEvaluator for different embedding types
from llama_index.core.evaluation import RetrieverEvaluator
metrics = ["mrr", "hit_rate"]
# Retrieval evaluator for float embedding type
retriever_evaluator_float = RetrieverEvaluator.from_metric_names(
    metrics, retriever=retriever_float
)
# Retrieval evaluator for int8 embedding type
retriever_evaluator_int8 = RetrieverEvaluator.from_metric_names(
    metrics, retriever=retriever_int8
)
# Retrieval evaluator for binary embedding type
retriever_evaluator_binary = RetrieverEvaluator.from_metric_names(
    metrics, retriever=retriever_binary
)
# Retrieval evaluator for ubinary embedding type
retriever_evaluator_ubinary = RetrieverEvaluator.from_metric_names(
    metrics, retriever=retriever_ubinary
)
# try it out on a sample query
sample_id, sample_query = list(qa_dataset.queries.items())[0]
sample_expected = qa_dataset.relevant_docs[sample_id]
eval_result = retriever_evaluator_float.evaluate(sample_query, sample_expected)
print(eval_result)
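The result object also exposes the raw metric values as a dict (the same metric_vals_dict we use in display_results further below), in case you want to post-process them yourself.
# Access the raw metric values for this single-query evaluation.
print(eval_result.metric_vals_dict)  # e.g. {'mrr': ..., 'hit_rate': ...}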
# Evaluation on the entire dataset
# float embedding type
eval_results_float = await retriever_evaluator_float.aevaluate_dataset(
qa_dataset
)
# int8 embedding type
eval_results_int8 = await retriever_evaluator_int8.aevaluate_dataset(
qa_dataset
)
# binary embedding type
eval_results_binary = await retriever_evaluator_binary.aevaluate_dataset(
qa_dataset
)
# ubinary embedding type
eval_results_ubinary = await retriever_evaluator_ubinary.aevaluate_dataset(
qa_dataset
)
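The await calls above rely on the notebook's running event loop (with nest_asyncio applied); if you adapt this notebook into a plain Python script, one option, sketched below, is to drive the coroutine with asyncio.run.
# In a regular .py script (no running event loop), something like this works:
import asyncio

eval_results_float = asyncio.run(
    retriever_evaluator_float.aevaluate_dataset(qa_dataset)
)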
We use display_results to display the results in a dataframe for each retriever.
import pandas as pd
def display_results(name, eval_results):
"""Display results from evaluate."""
metric_dicts = []
for eval_result in eval_results:
metric_dict = eval_result.metric_vals_dict
metric_dicts.append(metric_dict)
full_df = pd.DataFrame(metric_dicts)
hit_rate = full_df["hit_rate"].mean()
mrr = full_df["mrr"].mean()
columns = {"Embedding Type": [name], "hit_rate": [hit_rate], "mrr": [mrr]}
metric_df = pd.DataFrame(columns)
return metric_df
# metrics for float embedding type
metrics_float = display_results("float", eval_results_float)
# metrics for int8 embedding type
metrics_int8 = display_results("int8", eval_results_int8)
# metrics for binary embedding type
metrics_binary = display_results("binary", eval_results_binary)
# metrics for ubinary embedding type
metrics_ubinary = display_results("ubinary", eval_results_ubinary)
combined_metrics = pd.concat(
[metrics_float, metrics_int8, metrics_binary, metrics_ubinary]
)
combined_metrics.set_index(["Embedding Type"], append=True, inplace=True)
combined_metrics
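To pick an embedding type for your pipeline, you can also sort the combined table, for example by hit rate and then MRR; a small optional sketch:
# Optional: rank embedding types by hit rate, breaking ties with MRR.
combined_metrics.sort_values(["hit_rate", "mrr"], ascending=False)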