
Qdrant FastEmbed Embeddings

docs/examples/embeddings/fastembed.ipynb



LlamaIndex supports FastEmbed for embeddings generation.

If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.

```python
%pip install llama-index-embeddings-fastembed
```

```python
%pip install llama-index
```

To use this provider, the fastembed package needs to be installed.

```python
%pip install fastembed
```

The list of supported models can be found in the FastEmbed documentation.

```python
from llama_index.embeddings.fastembed import FastEmbedEmbedding

embed_model = FastEmbedEmbedding(model_name="BAAI/bge-small-en-v1.5")
```
```python
embeddings = embed_model.get_text_embedding("Some text to embed.")
print(len(embeddings))
print(embeddings[:5])
```