
Ollama Embeddings

docs/examples/embeddings/ollama_embedding.ipynb


<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/embeddings/ollama_embedding.ipynb" target="_parent">Open in Colab</a>


If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.

```python
%pip install llama-index-embeddings-ollama
```
```python
from llama_index.embeddings.ollama import OllamaEmbedding

ollama_embedding = OllamaEmbedding(
    model_name="embeddinggemma",
    base_url="http://localhost:11434",
    # Can optionally pass additional kwargs to Ollama
    # ollama_additional_kwargs={"mirostat": 0},
)
```

You can generate embeddings using one of several methods:

  • get_text_embedding_batch
  • get_text_embedding
  • get_query_embedding

As well as async versions:

  • aget_text_embedding_batch
  • aget_text_embedding
  • aget_query_embedding
```python
embeddings = ollama_embedding.get_text_embedding_batch(
    ["This is a passage!", "This is another passage"], show_progress=True
)
print(f"Got vectors of length {len(embeddings[0])}")
print(embeddings[0][:10])
```
```python
embedding = ollama_embedding.get_text_embedding(
    "This is a piece of text!",
)
print(f"Got vector of length {len(embedding)}")
print(embedding[:10])
```
```python
embedding = ollama_embedding.get_query_embedding(
    "This is a query!",
)
print(f"Got vector of length {len(embedding)}")
print(embedding[:10])
```
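The async variants listed above mirror the sync calls one-for-one. A minimal sketch of `aget_query_embedding` usage (assumes the same `embeddinggemma` model and a running Ollama server at `http://localhost:11434`, as in the examples above):

```python
import asyncio

from llama_index.embeddings.ollama import OllamaEmbedding


async def main():
    ollama_embedding = OllamaEmbedding(
        model_name="embeddinggemma",
        base_url="http://localhost:11434",
    )
    # Same arguments as the sync get_query_embedding, but awaited,
    # so many embedding requests can run concurrently on one event loop.
    embedding = await ollama_embedding.aget_query_embedding("This is a query!")
    print(f"Got vector of length {len(embedding)}")


asyncio.run(main())
```

To embed many texts concurrently, you can likewise `await ollama_embedding.aget_text_embedding_batch([...])` inside the same coroutine.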