sample_apps/generative_benchmarking/generate_benchmark.ipynb

Generate Custom Benchmark

This notebook walks through how to generate a custom benchmark based on your data.

We will be using OpenAI for our embedding model and LLM, but these can easily be swapped out:

  • Embedding functions are provided in functions/embed.py
  • LLM prompts are provided in functions/llm.py

NOTE: When switching out embedding models, you will need to make a new collection for your new embeddings. Then, embed the same documents and queries with the embedding model of your choice.

Use the same golden dataset of queries when comparing embedding models on the same data.

Cells that should be modified when switching out embedding models are labeled as [Modify]

1. Setup

1.1 Install & Import

Install the necessary packages.

python
%pip install -r requirements.txt

Import modules.

python
%load_ext autoreload
%autoreload 2

import chromadb
import pandas as pd
import numpy as np
import json
import os
import dotenv
from pathlib import Path
from datetime import datetime
from openai import OpenAI as OpenAIClient
from anthropic import Anthropic as AnthropicClient
from functions.llm import *
from functions.embed import *
from functions.chroma import *
from functions.evaluate import *
from functions.visualize import *
from functions.types import *

dotenv.load_dotenv()

1.2 Set Variables

We use pre-chunked Chroma Docs as an example. To run this notebook with your own data, uncomment the commented-out lines and fill them in.

[Modify] COLLECTION_NAME when you change your embedding model

python
with open('data/chroma_docs.json', 'r') as f:
    corpus = json.load(f)

context = "This is a technical support bot for Chroma, a vector database company often used by developers for building AI applications."
example_queries = """
    how to add to a collection
    filter by metadata
    retrieve embeddings when querying
    how to use openai embedding function when adding to collection
    """

COLLECTION_NAME = "chroma-docs-openai-large" # change this collection name whenever you switch embedding models

# Generate a Benchmark with your own data:

# with open('filepath/to/your/data.json', 'r') as f:
#     corpus = json.load(f)

# context = "FILL IN WITH CONTEXT RELEVANT TO YOUR USE CASE"
# example_queries = "FILL IN WITH EXAMPLE QUERIES"

# COLLECTION_NAME = "YOUR COLLECTION NAME"
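For reference, the corpus JSON is expected to map each document ID to its chunk text. A minimal sketch of producing and loading such a file (the IDs, contents, and filename here are purely illustrative):

```python
import json
import os
import tempfile

# Illustrative corpus: each document ID maps to its chunk text.
corpus = {
    "docs/collections/add": "Add documents to a collection with collection.add(...).",
    "docs/collections/query": "Query a collection with collection.query(...).",
}

# Write and re-read it the same way the notebook loads data/chroma_docs.json.
path = os.path.join(tempfile.gettempdir(), "my_corpus.json")
with open(path, "w") as f:
    json.dump(corpus, f, indent=2)

with open(path, "r") as f:
    loaded = json.load(f)
```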

1.3 Load API Keys

To use Chroma Cloud, sign up for a Chroma Cloud account and create a new database.

python
# Embedding Model & LLM
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# If you want to use Chroma Cloud, uncomment and fill in the following:
# CHROMA_TENANT = "YOUR CHROMA TENANT ID"
# X_CHROMA_TOKEN = "YOUR CHROMA API KEY"
# DATABASE_NAME = "YOUR CHROMA DATABASE NAME"

1.4 Set Clients

Initialize the clients.

python
chroma_client = chromadb.Client()

# If you want to use Chroma Cloud, uncomment the following line:
# chroma_client = chromadb.HttpClient(
#   ssl=True,
#   host='api.trychroma.com',
#   tenant=CHROMA_TENANT,
#   database=DATABASE_NAME,
#   headers={
#     'x-chroma-token': X_CHROMA_TOKEN
#   }
# )

openai_client = OpenAIClient(api_key=OPENAI_API_KEY)

2. Create Chroma Collection

If you already have a Chroma Collection for your data, skip to 2.3.

2.1 Load in Your Data

python
corpus_ids = list(corpus.keys())
corpus_documents = [corpus[key] for key in corpus_ids]

2.2 Embed Data & Add to Chroma Collection

Embed your documents using an embedding model of your choice. We use OpenAI's text-embedding-3-large here; other embedding functions are available in functions/embed.py. You may also define your own embedding function.

We use batching and multi-threading for efficiency.

[Modify] embedding function (openai_embed_in_batches) to the embedding model you wish to use

python
corpus_embeddings = openai_embed_in_batches(
    openai_client=openai_client,
    texts=corpus_documents,
    model="text-embedding-3-large",
)

corpus_collection = chroma_client.get_or_create_collection(
    name=COLLECTION_NAME,
    metadata={"hnsw:space": "cosine"}
)

collection_add_in_batches(
    collection=corpus_collection,
    ids=corpus_ids,
    texts=corpus_documents,
    embeddings=corpus_embeddings,
)

corpus = {
    id: {
        'document': document,
        'embedding': embedding
    }
    for id, document, embedding in zip(corpus_ids, corpus_documents, corpus_embeddings)
}
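The batching-plus-threading pattern behind a helper like openai_embed_in_batches can be sketched generically. This is only an illustration of the pattern, not the actual implementation in functions/embed.py; embed_fn is a stand-in for a real embedding API call:

```python
from concurrent.futures import ThreadPoolExecutor

def embed_in_batches(embed_fn, texts, batch_size=100, max_workers=4):
    """Split texts into batches, embed batches concurrently, preserve order."""
    batches = [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        per_batch = list(pool.map(embed_fn, batches))  # map preserves batch order
    return [vec for batch in per_batch for vec in batch]

# Stand-in "model": embed each text as a one-dimensional vector of its length.
fake_embed = lambda batch: [[float(len(t))] for t in batch]
vectors = embed_in_batches(fake_embed, ["a", "bb", "ccc"], batch_size=2)
```

Because `ThreadPoolExecutor.map` yields results in submission order, the returned embeddings line up with the input texts even when batches finish out of order.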
2.3 Load Existing Collection

python
corpus_collection = chroma_client.get_collection(
    name=COLLECTION_NAME
)

corpus = get_collection_items(
    collection=corpus_collection
)

corpus_ids = [key for key in corpus.keys()]
corpus_documents = [corpus[key]['document'] for key in corpus_ids]

3. Filter Documents for Quality

We begin by filtering our documents prior to query generation. This step ensures that we avoid generating queries from irrelevant or incomplete documents.

3.1 Set Criteria

We use the following criteria:

  • relevance checks whether the document is relevant to the specified context
  • completeness checks for overall quality of the document

You can modify the criteria as you see fit.

python
relevance = f"The document is relevant to the following context: {context}"
completeness = "The document is complete, meaning that it contains useful information to answer queries and does not only serve as an introduction to the main content that users may be looking for."

criteria = [relevance, completeness]
criteria_labels = ["relevance", "completeness"]
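As intuition for how labeled criteria can be turned into an LLM judgment, a filtering prompt might ask for a yes/no answer per criterion. This is a hypothetical sketch only; the actual prompt used by filter_documents lives in functions/llm.py:

```python
def build_filter_prompt(document, criteria, criteria_labels):
    """Assemble a yes/no judging prompt over each labeled criterion (hypothetical)."""
    criterion_lines = [
        f"- {label}: {criterion}"
        for label, criterion in zip(criteria_labels, criteria)
    ]
    return (
        "Answer yes or no for each criterion below, applied to the document.\n"
        + "\n".join(criterion_lines)
        + f"\n\nDocument:\n{document}"
    )

prompt = build_filter_prompt(
    "Chroma supports filtering query results by metadata.",
    ["The document is relevant.", "The document is complete."],
    ["relevance", "completeness"],
)
```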

3.2 Filter Documents

We filter our documents using gpt-4o-mini. Batching functions are also available in functions/llm.py.

python
filtered_document_ids = filter_documents(
    client=openai_client,
    model="gpt-4o-mini",
    documents=corpus_documents,
    ids=corpus_ids,
    criteria=criteria,
    criteria_labels=criteria_labels
)
python
passed_documents = [corpus[id]['document'] for id in filtered_document_ids]

filtered_id_set = set(filtered_document_ids)
failed_document_ids = [id for id in corpus_ids if id not in filtered_id_set]

3.3 View Results

python
print(f"Number of documents passed: {len(filtered_document_ids)}")
print(f"Number of documents failed: {len(failed_document_ids)}")
print("-"*80)
print("Example of passed document:")
print(corpus[filtered_document_ids[0]]['document'])
print("-"*80)
print("Example of failed document:")
print(corpus[failed_document_ids[0]]['document'])
print("-"*80)

4. Generate Golden Dataset

Using our filtered documents, we can generate a golden dataset of queries.

4.1 Create Custom Prompt

We will use context and example_queries for query generation.

4.2 Generate Queries

Generate queries with gpt-4o. Batching functions are available in functions/llm.py.

python
golden_dataset = create_golden_dataset(
    client=openai_client,
    model="gpt-4o",
    documents=passed_documents,
    ids=filtered_document_ids,
    context=context,
    example_queries=example_queries
)

golden_dataset.head()

5. Evaluate

Now that we have our golden dataset, we can run our evaluation.

5.1 Prepare Inputs

python
queries = golden_dataset['query'].tolist()
ids = golden_dataset['id'].tolist()

Embed generated queries.

[Modify] embedding function (openai_embed_in_batches) to the embedding model you wish to use

python
query_embeddings = openai_embed_in_batches(
    openai_client=openai_client,
    texts=queries,
    model="text-embedding-3-large"
)

query_embeddings_lookup_dict = {
    id: QueryItem(
        text=query,
        embedding=embedding
    )
    for id, query, embedding in zip(ids, queries, query_embeddings)
}

query_embeddings_lookup = QueryLookup(lookup=query_embeddings_lookup_dict)

Create our qrels (query relevance labels) dataframe. In this case, each query and its corresponding document share the same id.

python
qrels = pd.DataFrame(
    {
        "query-id": ids,
        "corpus-id": ids,
        "score": 1
    }
)

5.2 Run Benchmark

python
results = run_benchmark(
    query_embeddings_lookup=query_embeddings_lookup,
    collection=corpus_collection,
    qrels=qrels
)
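run_benchmark retrieves nearest neighbors from the collection for each query embedding and scores them against the qrels. The core Recall@k computation looks roughly like this (a sketch of the metric, not the exact implementation in functions/evaluate.py):

```python
def recall_at_k(retrieved_ids, relevant_ids, k):
    """Fraction of relevant documents found in the top-k retrieved IDs."""
    top_k = set(retrieved_ids[:k])
    hits = sum(1 for doc_id in relevant_ids if doc_id in top_k)
    return hits / len(relevant_ids)

# With exactly one relevant document per query (as in our qrels),
# Recall@k for a single query is either 0.0 or 1.0.
score = recall_at_k(["d2", "d7", "d1"], ["d1"], k=3)
```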

Save results.

This is helpful for comparison (e.g. comparing different embedding models).

[Modify] "model" to the model you are using

python
timestamp = datetime.now().strftime("%Y-%m-%d--%H-%M-%S")
results_to_save = {
    "model": "text-embedding-3-large",
    "results": results
}
python
results_dir = Path("results")
results_dir.mkdir(exist_ok=True)

with open(results_dir / f'{timestamp}.json', 'w') as f:
    json.dump(results_to_save, f)
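To compare runs later (e.g. across embedding models), the saved JSON files can be read back into a single lookup. A sketch, assuming each file has the "model" and "results" keys written above; the filenames and metric names here are illustrative:

```python
import json
import os
import tempfile

# Write two mock result files to a temporary directory (stand-ins for real runs).
results_dir = tempfile.mkdtemp()
for name, payload in [
    ("run-a.json", {"model": "text-embedding-3-large", "results": {"Recall@3": 0.91}}),
    ("run-b.json", {"model": "text-embedding-3-small", "results": {"Recall@3": 0.84}}),
]:
    with open(os.path.join(results_dir, name), "w") as f:
        json.dump(payload, f)

# Load every saved run into {model: results} for side-by-side comparison.
runs = {}
for filename in sorted(os.listdir(results_dir)):
    with open(os.path.join(results_dir, filename)) as f:
        data = json.load(f)
    runs[data["model"]] = data["results"]
```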