sample_apps/generative_benchmarking/generate_benchmark.ipynb
This notebook walks through how to generate a custom benchmark based on your data.
We will be using OpenAI for our embedding model and LLM, but this can easily be switched out:
- `functions/embed.py`
- `functions/llm.py`

NOTE: When switching out embedding models, you will need to create a new collection for your new embeddings, then embed the same documents and queries with the embedding model of your choice. Use the same golden dataset of queries when comparing embedding models on the same data.

Cells that should be modified when switching out embedding models are labeled [Modify].
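As a sketch, switching models after an initial run looks like this (openai_embed_in_batches is introduced later in this notebook; the alternative model name is illustrative):
# Hypothetical swap to a different embedding model
COLLECTION_NAME = "chroma-docs-openai-small"  # one collection per embedding model
corpus_embeddings = openai_embed_in_batches(
    openai_client=openai_client,
    texts=corpus_documents,  # the SAME documents
    model="text-embedding-3-small",  # the new model
)
query_embeddings = openai_embed_in_batches(
    openai_client=openai_client,
    texts=queries,  # the SAME golden queries
    model="text-embedding-3-small",
)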
Install the necessary packages.
%pip install -r requirements.txt
Import modules.
%load_ext autoreload
%autoreload 2
import chromadb
import pandas as pd
import numpy as np
import json
import os
import dotenv
from pathlib import Path
from datetime import datetime
from openai import OpenAI as OpenAIClient
from anthropic import Anthropic as AnthropicClient
from functions.llm import *
from functions.embed import *
from functions.chroma import *
from functions.evaluate import *
from functions.visualize import *
from functions.types import *
dotenv.load_dotenv()
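load_dotenv() reads secrets from a local .env file in this directory. A minimal example (the key name matches the lookup below; the value is a placeholder):
# .env
OPENAI_API_KEY=your-openai-api-key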
We use pre-chunked Chroma Docs as an example. To run this notebook with your own data, uncomment the commented-out lines below and fill them in.
[Modify] COLLECTION_NAME when you change your embedding model
with open('data/chroma_docs.json', 'r') as f:
corpus = json.load(f)
context = "This is a technical support bot for Chroma, a vector database company often used by developers for building AI applications."
example_queries = """
how to add to a collection
filter by metadata
retrieve embeddings when querying
how to use openai embedding function when adding to collection
"""
COLLECTION_NAME = "chroma-docs-openai-large" # change this collection name whenever you switch embedding models
# Generate a Benchmark with your own data:
# with open('filepath/to/your/data.json', 'r') as f:
# corpus = json.load(f)
# context = "FILL IN WITH CONTEXT RELEVANT TO YOUR USE CASE"
# example_queries = "FILL IN WITH EXAMPLE QUERIES"
# COLLECTION_NAME = "YOUR COLLECTION NAME"
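Whichever source you use, the corpus JSON is expected to map document ids to chunk text, since the cells below treat the keys as ids and the values as documents. An illustrative layout (ids and text are made up):
{
    "doc-001": "Add documents to a collection with collection.add(ids=..., documents=...).",
    "doc-002": "Use the where argument to filter query results by metadata."
}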
To use Chroma Cloud, you can sign up for a Chroma Cloud account and create a new database.
# Embedding Model & LLM
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
# If you want to use Chroma Cloud, uncomment and fill in the following:
# CHROMA_TENANT = "YOUR CHROMA TENANT ID"
# X_CHROMA_TOKEN = "YOUR CHROMA API KEY"
# DATABASE_NAME = "YOUR CHROMA DATABASE NAME"
Initialize the clients.
chroma_client = chromadb.Client()
# If you want to use Chroma Cloud, uncomment the following line:
# chroma_client = chromadb.HttpClient(
# ssl=True,
# host='api.trychroma.com',
# tenant=CHROMA_TENANT,
# database=DATABASE_NAME,
# headers={
# 'x-chroma-token': X_CHROMA_TOKEN
# }
# )
openai_client = OpenAIClient(api_key=OPENAI_API_KEY)
If you already have a Chroma collection for your data, skip ahead to the cells below that load an existing collection.
corpus_ids = list(corpus.keys())
corpus_documents = [corpus[key] for key in corpus_ids]
Embed your documents using an embedding model of your choice. We use OpenAI's `text-embedding-3-large` here, but other embedding functions are available in `functions/embed.py`. You may also define your own embedding function.
We use batching and multi-threading for efficiency.
[Modify] embedding function (openai_embed_in_batches) to the embedding model you wish to use
corpus_embeddings = openai_embed_in_batches(
openai_client=openai_client,
texts=corpus_documents,
model="text-embedding-3-large",
)
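If you would rather embed locally, here is a minimal drop-in sketch (assumes the sentence-transformers package, which is not part of this notebook's requirements; local_embed is a hypothetical helper, not one of the functions in functions/embed.py):
from sentence_transformers import SentenceTransformer

def local_embed(texts: list[str], model_name: str = "all-MiniLM-L6-v2") -> list[list[float]]:
    # Encode all texts with a local sentence-transformers model
    model = SentenceTransformer(model_name)
    return model.encode(texts, show_progress_bar=True).tolist()

# corpus_embeddings = local_embed(corpus_documents)  # remember to change COLLECTION_NAME too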
corpus_collection = chroma_client.get_or_create_collection(
name=COLLECTION_NAME,
metadata={"hnsw:space": "cosine"}
)
collection_add_in_batches(
collection=corpus_collection,
ids=corpus_ids,
texts=corpus_documents,
embeddings=corpus_embeddings,
)
# Build an in-memory lookup of id -> document text and embedding
corpus = {
    id: {
        'document': document,
        'embedding': embedding
    }
    for id, document, embedding in zip(corpus_ids, corpus_documents, corpus_embeddings)
}
corpus_collection = chroma_client.get_collection(
name=COLLECTION_NAME
)
corpus = get_collection_items(
collection=corpus_collection
)
corpus_ids = list(corpus.keys())
corpus_documents = [corpus[key]['document'] for key in corpus_ids]
We begin by filtering our documents prior to query generation. This step ensures that we avoid generating queries from irrelevant or incomplete documents.
We use the following criteria:
- relevance: checks whether the document is relevant to the specified context
- completeness: checks for the overall quality of the document

You can modify the criteria as you see fit; an example of adding a criterion follows the next cell.
relevance = f"The document is relevant to the following context: {context}"
completeness = "The document is complete, meaning that it contains useful information to answer queries and does not only serve as an introduction to the main content that users may be looking for."
criteria = [relevance, completeness]
criteria_labels = ["relevance", "completeness"]
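For example, you could append a third criterion in the same way (the wording here is illustrative):
# Illustrative extra criterion: screen out documents that are only a heading or a stub
length = "The document contains more than a single sentence of substantive content."
criteria.append(length)
criteria_labels.append("length")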
We filter our documents using `gpt-4o-mini`. Batching functions are also available in `functions/llm.py`.
filtered_document_ids = filter_documents(
client=openai_client,
model="gpt-4o-mini",
documents=corpus_documents,
ids=corpus_ids,
criteria=criteria,
criteria_labels=criteria_labels
)
passed_documents = [corpus[id]['document'] for id in filtered_document_ids]
failed_document_ids = [id for id in corpus_ids if id not in filtered_document_ids]
print(f"Number of documents passed: {len(filtered_document_ids)}")
print(f"Number of documents failed: {len(failed_document_ids)}")
print("-"*80)
print("Example of passed document:")
print(corpus[filtered_document_ids[0]]['document'])
print("-"*80)
print("Example of failed document:")
print(corpus[failed_document_ids[0]]['document'])
print("-"*80)
Using our filtered documents, we can generate a golden dataset of queries.
We will use context and example_queries for query generation.
Generate queries with `gpt-4o`. Batching functions are available in `functions/llm.py`.
golden_dataset = create_golden_dataset(
client=openai_client,
model="gpt-4o",
documents=passed_documents,
ids=filtered_document_ids,
context=context,
example_queries=example_queries
)
golden_dataset.head()
Now that we have our golden dataset, we can run our evaluation.
queries = golden_dataset['query'].tolist()
ids = golden_dataset['id'].tolist()
Embed generated queries.
[Modify] embedding function (openai_embed_in_batches) to the embedding model you wish to use
query_embeddings = openai_embed_in_batches(
openai_client=openai_client,
texts=queries,
model="text-embedding-3-large"
)
query_embeddings_lookup_dict = {
id: QueryItem(
text=query,
embedding=embedding
)
for id, query, embedding in zip(ids, queries, query_embeddings)
}
query_embeddings_lookup = QueryLookup(lookup=query_embeddings_lookup_dict)
Create our qrels (query relevance labels) dataframe. In this case, each query and its corresponding document share the same id.
qrels = pd.DataFrame(
{
"query-id": ids,
"corpus-id": ids,
"score": 1
}
)
results = run_benchmark(
query_embeddings_lookup=query_embeddings_lookup,
collection=corpus_collection,
qrels=qrels
)
Save results.
This is helpful for comparison (e.g. comparing different embedding models).
[Modify] "model" to the model you are using
timestamp = datetime.now().strftime("%Y-%m-%d--%H-%M-%S")
results_to_save = {
    "model": "text-embedding-3-large",
    "results": results
}
results_dir = Path("results")
results_dir.mkdir(exist_ok=True)  # create the results directory if it doesn't exist
with open(results_dir / f'{timestamp}.json', 'w') as f:
    json.dump(results_to_save, f)
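To compare runs later, you can load the saved files back into a single table (a minimal sketch; assumes every file in results/ was written by the cell above):
# Gather all saved benchmark runs for side-by-side comparison
saved_runs = []
for path in sorted(results_dir.glob("*.json")):
    with open(path, 'r') as f:
        run = json.load(f)
    saved_runs.append({"timestamp": path.stem, "model": run["model"], "results": run["results"]})

pd.DataFrame(saved_runs)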