
Basic embedding retrieval with Chroma

examples/basic_functionality/start_here.ipynb


This notebook demonstrates the most basic use of Chroma to store and retrieve information using embeddings. This core building block is at the heart of many powerful AI applications.

What are embeddings?

Embeddings are the AI-native way to represent any kind of data, making them the perfect fit for working with all kinds of AI-powered tools and algorithms. They can represent text, images, and soon audio and video.

To create an embedding, data is fed into an embedding model, which outputs vectors of numbers. The model is trained so that 'similar' data, e.g. text with similar meanings or images with similar content, produces vectors that are nearer to one another than those produced by dissimilar data.

Embeddings and retrieval

We can use the similarity property of embeddings to search for and retrieve information. For example, we can find documents relevant to a particular topic, or images similar to a given image. Rather than searching for keywords or tags, we can search by finding data with similar semantic meaning.
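As an illustrative sketch of this idea, retrieval by similarity reduces to finding the stored vector nearest to a query vector. The 3-dimensional vectors below are hand-picked toy values standing in for real embedding model output; in practice an embedding model produces much higher-dimensional vectors from the text itself.

```python
import numpy as np

# Toy "embeddings": hand-picked vectors standing in for model output.
stored = np.array([
    [0.9, 0.1, 0.0],   # "the cat sat on the mat"
    [0.1, 0.9, 0.1],   # "stock prices fell today"
    [0.7, 0.3, 0.2],   # "a kitten curled up on the rug"
])
query = np.array([0.85, 0.15, 0.05])  # "where is the cat?"

# Cosine similarity: dot product of the vectors divided by their norms.
def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine_sim(query, v) for v in stored]
best = int(np.argmax(scores))
print(best)  # -> 0: the query is nearest the first cat sentence
```

The cat-related sentences score far higher against the cat query than the finance sentence does, which is exactly the property a vector database exploits.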

python
%pip install -Uq chromadb numpy datasets tqdm ipywidgets

Example Dataset

As a demonstration, we use the SciQ dataset, available from HuggingFace.

Dataset description, from HuggingFace:

The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in multiple-choice format with 4 answer options each. For the majority of the questions, an additional paragraph with supporting evidence for the correct answer is provided.

In this notebook, we will demonstrate how to retrieve supporting evidence for a given question.

python
# Get the SciQ dataset from HuggingFace
from datasets import load_dataset

dataset = load_dataset("sciq", split="train")

# Filter the dataset to only include questions with a support
dataset = dataset.filter(lambda x: x["support"] != "")

print("Number of questions with support: ", len(dataset))

Loading the data into Chroma

Chroma comes with a built-in embedding model, which makes it simple to load text. We can load the SciQ dataset into Chroma with just a few lines of code.

python
# Import Chroma and instantiate a client. The default Chroma client is ephemeral, meaning it will not save to disk.
import chromadb

client = chromadb.Client()
python
# Create a new Chroma collection to store the supporting evidence. We don't need to specify an embedding function, and the default will be used.
collection = client.create_collection("sciq_supports")
python
from tqdm.notebook import tqdm

# Load the supporting evidence in batches of 1000
batch_size = 1000
for i in tqdm(range(0, len(dataset), batch_size), desc="Adding documents"):
    batch_end = min(i + batch_size, len(dataset))
    collection.add(
        ids=[str(j) for j in range(i, batch_end)],  # Chroma IDs are strings
        documents=dataset["support"][i:batch_end],
        metadatas=[{"type": "support"} for _ in range(i, batch_end)],
    )
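The slicing logic above can be generalized into a small reusable helper. This is a sketch, not part of Chroma's API: it yields `(ids, documents)` pairs of at most `batch_size` items, which you could pass straight to `collection.add`.

```python
# A small helper (not part of Chroma) that generalizes the loop above:
# yields (ids, documents) slices of at most batch_size items each.
def batched_records(documents, batch_size=1000):
    for start in range(0, len(documents), batch_size):
        end = min(start + batch_size, len(documents))
        ids = [str(j) for j in range(start, end)]
        yield ids, documents[start:end]

# Usage with a toy document list:
docs = [f"doc {n}" for n in range(25)]
batches = list(batched_records(docs, batch_size=10))
print(len(batches))        # -> 3 batches: 10 + 10 + 5 documents
print(batches[-1][0][0])   # -> "20", the first id of the last batch
```

Batching keeps each call to `collection.add` small, which avoids holding the whole dataset's documents in a single request.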

Querying the data

Once the data is loaded, we can use Chroma to find supporting evidence for the questions in the dataset. In this example, we retrieve the most relevant result according to the embedding similarity score.

Chroma handles computing similarity and finding the most relevant results for you, so you can focus on building your application.

python
results = collection.query(
    query_texts=dataset["question"][:10],
    n_results=1,
)
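The value returned by `query` is a dict of parallel lists with one inner list per query text. A minimal mock (with hypothetical ids, documents, and distances; real values come from `collection.query` above) shows how to index into that shape:

```python
# Hypothetical result shape for two query texts with n_results=1 each;
# in the notebook, this dict comes from collection.query.
results = {
    "ids": [["12"], ["407"]],
    "documents": [["Evidence for question 0..."], ["Evidence for question 1..."]],
    "distances": [[0.21], [0.35]],
}

# results["documents"][i] is the list of documents retrieved for query i;
# with n_results=1, each inner list has exactly one element.
first_support = results["documents"][0][0]
print(first_support)
```

This is why the loop below indexes with `results['documents'][i][0]`: `i` selects the query, and `0` selects its single retrieved document.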

We display the query questions along with their retrieved supports:

python
# Print the question and the corresponding support
for i, q in enumerate(dataset['question'][:10]):
    print(f"Question: {q}")
    print(f"Retrieved support: {results['documents'][i][0]}")
    print()

What's next?

Check out the Chroma documentation to get started with building your own applications.

The core embeddings-based retrieval functionality demonstrated here is at the heart of many powerful AI applications, like using large language models with Chroma to chat with your documents, as well as memory for agents like BabyAGI and Voyager.

Chroma is already integrated with many popular AI applications frameworks, including LangChain and LlamaIndex.

Join our community to learn more and get help with your projects: Discord | Twitter

We are hiring!