import { Callout } from '/snippets/callout.mdx';
<Callout> **Prerequisites:** To use hybrid search with sparse embeddings, you must first configure a sparse vector index in your collection schema. See [Sparse Vector Search Setup](../schema/sparse-vector-search) for configuration instructions. </Callout>

Reciprocal Rank Fusion (RRF) combines multiple rankings by using rank positions rather than raw scores. This makes it effective for merging rankings with different score scales.
RRF combines rankings using the formula:

$$ \text{score} = -\sum_{i} \frac{w_i}{k + r_i} $$

Where:

- $w_i$ is the weight assigned to ranking $i$
- $k$ is the smoothing parameter (default 60)
- $r_i$ is the document's rank position (0, 1, 2, ...) in ranking $i$

The score is negative because Chroma uses ascending order (lower scores = better matches).
<Callout> **Important:** The legacy `query` API outputs *distances*, whereas RRF uses *scores*. </Callout>

<CodeGroup>
```python Python
# Example: How RRF calculates scores
# Document A: rank 0 in first Knn, rank 2 in second Knn
# Document B: rank 1 in first Knn, rank 0 in second Knn

# With equal weights (1.0, 1.0) and k=60:
# Document A score = -(1.0/(60+0) + 1.0/(60+2)) = -(0.0167 + 0.0161) = -0.0328
# Document B score = -(1.0/(60+1) + 1.0/(60+0)) = -(0.0164 + 0.0167) = -0.0331

# Document B ranks higher (lower score)
```

```typescript TypeScript
// Example: How RRF calculates scores
// Document A: rank 0 in first Knn, rank 2 in second Knn
// Document B: rank 1 in first Knn, rank 0 in second Knn

// With equal weights (1.0, 1.0) and k=60:
// Document A score = -(1.0/(60+0) + 1.0/(60+2)) = -(0.0167 + 0.0161) = -0.0328
// Document B score = -(1.0/(60+1) + 1.0/(60+0)) = -(0.0164 + 0.0167) = -0.0331

// Document B ranks higher (lower score)
```
</CodeGroup>
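The arithmetic above can be verified with a few lines of plain Python; the `rrf_score` helper here is illustrative, not part of the Chroma API:

```python
def rrf_score(ranks, weights=None, k=60):
    """Negated reciprocal rank fusion: lower is better."""
    weights = weights or [1.0] * len(ranks)
    return -sum(w / (k + r) for w, r in zip(weights, ranks))

doc_a = rrf_score([0, 2])  # rank 0 in first Knn, rank 2 in second
doc_b = rrf_score([1, 0])  # rank 1 in first Knn, rank 0 in second
print(round(doc_a, 4))  # -0.0328
print(round(doc_b, 4))  # -0.0331
print(doc_b < doc_a)    # True: Document B wins with the lower score
```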
| Parameter | Type | Default | Description |
|---|---|---|---|
| `ranks` | `List[Rank]` | Required | List of ranking expressions (each must have `return_rank=True`) |
| `k` | `int` | `60` | Smoothing parameter; higher values reduce emphasis on top ranks |
| `weights` | `List[float]` or `None` | `None` | Weights for each ranking (defaults to 1.0 for each) |
| `normalize` | `bool` | `False` | If `True`, normalize weights to sum to 1.0 |
| Approach | Use Case | Pros | Cons |
|---|---|---|---|
| RRF | Different score scales (e.g., dense + sparse) | Scale-agnostic, robust to outliers | Requires `return_rank=True` |
| Linear combination | Same score scales | Simple, preserves distances | Sensitive to scale differences |
<CodeGroup>
```python Python
# RRF - works well with different scales
rrf = Rrf(
    ranks=[
        Knn(query="machine learning", return_rank=True),  # Dense embeddings
        Knn(query="machine learning", key="sparse_embedding", return_rank=True),  # Sparse embeddings
    ]
)

# Linear combination - better when scales are similar
linear = Knn(query="machine learning") * 0.7 + Knn(query="deep learning") * 0.3
```

```typescript TypeScript
// RRF - works well with different scales
const rrf = Rrf({
  ranks: [
    Knn({ query: "machine learning", returnRank: true }), // Dense embeddings
    Knn({ query: "machine learning", key: "sparse_embedding", returnRank: true }) // Sparse embeddings
  ]
});

// Linear combination - better when scales are similar
const linear = Knn({ query: "machine learning" }).multiply(0.7)
  .add(Knn({ query: "deep learning" }).multiply(0.3));
```
</CodeGroup>
```rust Rust
use chroma::types::{rrf, Key, QueryVector, RankExpr};

let dense = RankExpr::Knn {
    query: QueryVector::Dense(vec![0.1, 0.2, 0.3]),
    key: Key::Embedding,
    limit: 100,
    default: None,
    return_rank: true,
};

let sparse = RankExpr::Knn {
    query: QueryVector::Dense(vec![0.1, 0.2, 0.3]),
    key: Key::field("sparse_embedding"),
    limit: 100,
    default: None,
    return_rank: true,
};

let rrf_rank = rrf(vec![dense, sparse], Some(60), None, false)?;
```
RRF requires rank positions (0, 1, 2, ...), not distance scores. Always set `return_rank=True` on every `Knn` expression used in RRF.
<CodeGroup>
```python Python
# CORRECT - returns rank positions
rrf1 = Rrf([
    Knn(query="artificial intelligence", return_rank=True),  # Returns: 0, 1, 2, 3...
    Knn(query="artificial intelligence", key="sparse_embedding", return_rank=True),
])

# INCORRECT - returns distances
rrf2 = Rrf([
    Knn(query="artificial intelligence"),  # Returns: 0.23, 0.45, 0.67... (distances)
    Knn(query="artificial intelligence", key="sparse_embedding"),
])
# This will produce incorrect results!
```

```typescript TypeScript
// CORRECT - returns rank positions
const rrf1 = Rrf({
  ranks: [
    Knn({ query: "artificial intelligence", returnRank: true }), // Returns: 0, 1, 2, 3...
    Knn({ query: "artificial intelligence", key: "sparse_embedding", returnRank: true })
  ]
});

// INCORRECT - returns distances
const rrf2 = Rrf({
  ranks: [
    Knn({ query: "artificial intelligence" }), // Returns: 0.23, 0.45, 0.67... (distances)
    Knn({ query: "artificial intelligence", key: "sparse_embedding" })
  ]
});
// This will produce incorrect results!
```
</CodeGroup>
<CodeGroup>
```python Python
# Equal weights (default) - each ranking equally important
rrf1 = Rrf(
    ranks=[
        Knn(query="neural networks", return_rank=True),
        Knn(query="neural networks", key="sparse_embedding", return_rank=True),
    ]
)  # Implicit weights: [1.0, 1.0]

# Custom weights - adjust relative importance
rrf2 = Rrf(
    ranks=[
        Knn(query="neural networks", return_rank=True),
        Knn(query="neural networks", key="sparse_embedding", return_rank=True),
    ],
    weights=[3.0, 1.0],  # Dense 3x more important than sparse
)

# Normalized weights - ensures weights sum to 1.0
rrf3 = Rrf(
    ranks=[
        Knn(query="neural networks", return_rank=True),
        Knn(query="neural networks", key="sparse_embedding", return_rank=True),
    ],
    weights=[75, 25],  # Will be normalized to [0.75, 0.25]
    normalize=True,
)
```

```typescript TypeScript
// Equal weights (default) - each ranking equally important
const rrf1 = Rrf({
  ranks: [
    Knn({ query: "neural networks", returnRank: true }),
    Knn({ query: "neural networks", key: "sparse_embedding", returnRank: true })
  ]
}); // Implicit weights: [1.0, 1.0]

// Custom weights - adjust relative importance
const rrf2 = Rrf({
  ranks: [
    Knn({ query: "neural networks", returnRank: true }),
    Knn({ query: "neural networks", key: "sparse_embedding", returnRank: true })
  ],
  weights: [3.0, 1.0] // Dense 3x more important than sparse
});

// Normalized weights - ensures weights sum to 1.0
const rrf3 = Rrf({
  ranks: [
    Knn({ query: "neural networks", returnRank: true }),
    Knn({ query: "neural networks", key: "sparse_embedding", returnRank: true })
  ],
  weights: [75, 25], // Will be normalized to [0.75, 0.25]
  normalize: true
});
```
</CodeGroup>
The `k` parameter controls how much emphasis is placed on top-ranked results:
<CodeGroup>
```python Python
# Small k - top results heavily weighted
rrf1 = Rrf(ranks=[...], k=10)
# Rank 0 gets weight/(10+0) = weight/10
# Rank 10 gets weight/(10+10) = weight/20 (half as important)

# Default k - balanced
rrf2 = Rrf(ranks=[...], k=60)
# Rank 0 gets weight/(60+0) = weight/60
# Rank 10 gets weight/(60+10) = weight/70 (still significant)

# Large k - more uniform
rrf3 = Rrf(ranks=[...], k=200)
# Rank 0 gets weight/(200+0) = weight/200
# Rank 10 gets weight/(200+10) = weight/210 (almost equal importance)
```

```typescript TypeScript
// Small k - top results heavily weighted
const rrf1 = Rrf({ ranks: [...], k: 10 });
// Rank 0 gets weight/(10+0) = weight/10
// Rank 10 gets weight/(10+10) = weight/20 (half as important)

// Default k - balanced
const rrf2 = Rrf({ ranks: [...], k: 60 });
// Rank 0 gets weight/(60+0) = weight/60
// Rank 10 gets weight/(60+10) = weight/70 (still significant)

// Large k - more uniform
const rrf3 = Rrf({ ranks: [...], k: 200 });
// Rank 0 gets weight/(200+0) = weight/200
// Rank 10 gets weight/(200+10) = weight/210 (almost equal importance)
```
</CodeGroup>
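The effect of `k` can be made concrete with a short standalone calculation (the `contribution` helper below is illustrative, not part of the Chroma API):

```python
# How k changes the relative contribution of rank 0 vs rank 10
def contribution(rank, k, weight=1.0):
    return weight / (k + rank)

for k in (10, 60, 200):
    ratio = contribution(0, k) / contribution(10, k)
    print(f"k={k}: rank 0 contributes {ratio:.2f}x as much as rank 10")
# k=10: 2.00x, k=60: 1.17x, k=200: 1.05x
```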
The most common RRF use case is combining dense semantic embeddings with sparse keyword embeddings.
<CodeGroup>
```python Python
from chromadb import Search, K, Knn, Rrf

# Dense semantic embeddings
dense_rank = Knn(
    query="machine learning research",  # Text query for dense embeddings
    key="#embedding",  # Default embedding field
    return_rank=True,
    limit=200,  # Consider top 200 candidates
)

# Sparse keyword embeddings
sparse_rank = Knn(
    query="machine learning research",  # Text query for sparse embeddings
    key="sparse_embedding",  # Metadata field for sparse vectors
    return_rank=True,
    limit=200,
)

# Combine with RRF
hybrid_rank = Rrf(
    ranks=[dense_rank, sparse_rank],
    weights=[0.7, 0.3],  # 70% semantic, 30% keyword
    k=60,
)

# Use in search
search = (
    Search()
    .where(K("status") == "published")  # Optional filtering
    .rank(hybrid_rank)
    .limit(20)
    .select(K.DOCUMENT, K.SCORE, "title")
)

results = collection.search(search)
```

```typescript TypeScript
import { Search, K, Knn, Rrf } from 'chromadb';

// Dense semantic embeddings
const denseRank = Knn({
  query: "machine learning research", // Text query for dense embeddings
  key: "#embedding", // Default embedding field
  returnRank: true,
  limit: 200 // Consider top 200 candidates
});

// Sparse keyword embeddings
const sparseRank = Knn({
  query: "machine learning research", // Text query for sparse embeddings
  key: "sparse_embedding", // Metadata field for sparse vectors
  returnRank: true,
  limit: 200
});

// Combine with RRF
const hybridRank = Rrf({
  ranks: [denseRank, sparseRank],
  weights: [0.7, 0.3], // 70% semantic, 30% keyword
  k: 60
});

// Use in search
const search = new Search()
  .where(K("status").eq("published")) // Optional filtering
  .rank(hybridRank)
  .limit(20)
  .select(K.DOCUMENT, K.SCORE, "title");

const results = await collection.search(search);
```
</CodeGroup>
Each `Knn` component in RRF operates on the documents that pass the filter. The number of results from each component is the minimum of its `limit` parameter and the number of filtered documents. RRF handles varying result counts gracefully; use the `default` parameter to control how documents missing from one ranking are treated.
```typescript TypeScript
// Each Knn operates on filtered documents
// Results per Knn = min(limit, number of documents passing filter)
const rrf = Rrf({
  ranks: [
    Knn({ query: "quantum computing", returnRank: true, limit: 100 }),
    Knn({ query: "quantum computing", key: "sparse_embedding", returnRank: true, limit: 100 })
  ]
});
```
Documents must appear in at least one component ranking to be scored. To include documents that don't appear in a specific `Knn`'s results, set the `default` parameter on that `Knn`:
<CodeGroup>
```python Python
# Without default: only documents in BOTH rankings are scored
rrf1 = Rrf([
    Knn(query="deep learning", return_rank=True, limit=100),
    Knn(query="deep learning", key="sparse_embedding", return_rank=True, limit=100),
])

# With default: documents in EITHER ranking can be scored
rrf2 = Rrf([
    Knn(query="deep learning", return_rank=True, limit=100, default=1000),
    Knn(query="deep learning", key="sparse_embedding", return_rank=True, limit=100, default=1000),
])
# Documents missing from one ranking get default rank of 1000
```

```typescript TypeScript
// Without default: only documents in BOTH rankings are scored
const rrf1 = Rrf({
  ranks: [
    Knn({ query: "deep learning", returnRank: true, limit: 100 }),
    Knn({ query: "deep learning", key: "sparse_embedding", returnRank: true, limit: 100 })
  ]
});

// With default: documents in EITHER ranking can be scored
const rrf2 = Rrf({
  ranks: [
    Knn({ query: "deep learning", returnRank: true, limit: 100, default: 1000 }),
    Knn({ query: "deep learning", key: "sparse_embedding", returnRank: true, limit: 100, default: 1000 })
  ]
});
// Documents missing from one ranking get default rank of 1000
```
</CodeGroup>
`Rrf` is a convenience class that constructs the underlying ranking expression. You can build the same expression manually if needed:
<CodeGroup>
```python Python
# Using Rrf wrapper (recommended)
rrf = Rrf(ranks=[rank1, rank2], weights=[0.7, 0.3], k=60)

# Manual construction (equivalent)
# RRF formula: -sum(weight_i / (k + rank_i))
manual_rrf = -0.7 / (60 + rank1) - 0.3 / (60 + rank2)

# Both produce the same ranking expression
```

```typescript TypeScript
// Using Rrf wrapper (recommended)
const rrf = Rrf({
  ranks: [rank1, rank2],
  weights: [0.7, 0.3],
  k: 60
});

// Manual construction (equivalent)
// RRF formula: -sum(weight_i / (k + rank_i))
const manualRrf = Val(-0.7).divide(Val(60).add(rank1))
  .subtract(Val(0.3).divide(Val(60).add(rank2)));

// Both produce the same ranking expression
```
</CodeGroup>
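Evaluated on plain numbers rather than rank expressions, the wrapper formula and the manual expression compute the same value. This standalone sketch (not Chroma code; `rrf_formula` is an illustrative helper) checks the algebra:

```python
# Verify the wrapper formula matches the hand-written expression
def rrf_formula(ranks, weights, k=60):
    return -sum(w / (k + r) for w, r in zip(weights, ranks))

rank1, rank2 = 3, 7  # example rank positions for one document
manual = -0.7 / (60 + rank1) - 0.3 / (60 + rank2)
assert abs(rrf_formula([rank1, rank2], [0.7, 0.3]) - manual) < 1e-12
```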
Here's a practical example showing RRF with filtering and result processing:
<CodeGroup>
```python Python
from chromadb import Search, K, Knn, Rrf

# Create RRF ranking with text query
hybrid_rank = Rrf(
    ranks=[
        Knn(query="machine learning applications", return_rank=True, limit=300),
        Knn(query="machine learning applications", key="sparse_embedding", return_rank=True, limit=300),
    ],
    weights=[2.0, 1.0],  # Dense 2x more important
    k=60,
)

# Build complete search
search = (
    Search()
    .where((K("language") == "en") & (K("year") >= 2020))
    .rank(hybrid_rank)
    .limit(10)
    .select(K.DOCUMENT, K.SCORE, "title", "year")
)

# Execute and process results
results = collection.search(search)
rows = results.rows()[0]  # Get first (and only) search results

for i, row in enumerate(rows, 1):
    print(f"{i}. {row['metadata']['title']} ({row['metadata']['year']})")
    print(f"   RRF Score: {row['score']:.4f}")
    print(f"   Preview: {row['document'][:100]}...")
    print()
```

```typescript TypeScript
import { Search, K, Knn, Rrf } from 'chromadb';

// Create RRF ranking with text query
const hybridRank = Rrf({
  ranks: [
    Knn({ query: "machine learning applications", returnRank: true, limit: 300 }),
    Knn({ query: "machine learning applications", key: "sparse_embedding", returnRank: true, limit: 300 })
  ],
  weights: [2.0, 1.0], // Dense 2x more important
  k: 60
});

// Build complete search
const search = new Search()
  .where(
    K("language").eq("en")
      .and(K("year").gte(2020))
  )
  .rank(hybridRank)
  .limit(10)
  .select(K.DOCUMENT, K.SCORE, "title", "year");

// Execute and process results
const results = await collection.search(search);
const rows = results.rows()[0]; // Get first (and only) search results

for (const [i, row] of rows.entries()) {
  console.log(`${i + 1}. ${row.metadata?.title} (${row.metadata?.year})`);
  console.log(`   RRF Score: ${row.score?.toFixed(4)}`);
  console.log(`   Preview: ${row.document?.substring(0, 100)}...`);
  console.log();
}
```
</CodeGroup>
Example output:

```
1. Introduction to Neural Networks (2023)
   RRF Score: -0.0428
   Preview: Neural networks are computational models inspired by biological neural networks...

2. Deep Learning Fundamentals (2022)
   RRF Score: -0.0385
   Preview: This comprehensive guide covers the fundamental concepts of deep learning...
```
- Set `return_rank=True` for all `Knn` expressions in RRF
- Set `default` values in `Knn` if you want documents from partial matches to be scored