docs/examples/embeddings/ollama_embedding.ipynb
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/embeddings/ollama_embedding.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
If you're opening this notebook on Colab, you will probably need to install LlamaIndex 🦙.
%pip install llama-index-embeddings-ollama
from llama_index.embeddings.ollama import OllamaEmbedding
ollama_embedding = OllamaEmbedding(
    model_name="embeddinggemma",
    base_url="http://localhost:11434",
    # Can optionally pass additional kwargs to Ollama
    # ollama_additional_kwargs={"mirostat": 0},
)
You can generate embeddings using one of several methods:
- `get_text_embedding_batch`
- `get_text_embedding`
- `get_query_embedding`

As well as async versions:

- `aget_text_embedding_batch`
- `aget_text_embedding`
- `aget_query_embedding`

embeddings = ollama_embedding.get_text_embedding_batch(
["This is a passage!", "This is another passage"], show_progress=True
)
print(f"Got vectors of length {len(embeddings[0])}")
print(embeddings[0][:10])
embedding = ollama_embedding.get_text_embedding(
"This is a piece of text!",
)
print(f"Got vector of length {len(embedding)}")
print(embedding[:10])
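The async variants can be awaited directly in a notebook cell (or driven with `asyncio.run` from a script). A minimal sketch, assuming the same local Ollama server and `embeddinggemma` model as above:

```python
import asyncio


async def embed_async(texts: list[str]) -> list[list[float]]:
    # Assumes a local Ollama server on the default port, as above
    from llama_index.embeddings.ollama import OllamaEmbedding

    embed_model = OllamaEmbedding(model_name="embeddinggemma")
    return await embed_model.aget_text_embedding_batch(texts)


# In a notebook cell you can simply `await embed_async([...])`;
# from a script, use asyncio.run:
# embeddings = asyncio.run(
#     embed_async(["This is a passage!", "This is another passage"])
# )
```

The async versions are useful when embedding many batches concurrently, since each call is otherwise bound by a round trip to the Ollama server.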
embedding = ollama_embedding.get_query_embedding(
"This is a query!",
)
print(f"Got vector of length {len(embedding)}")
print(embedding[:10])
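A common next step is to compare a query embedding against passage embeddings with cosine similarity. A minimal pure-Python sketch (the short vectors here are illustrative stand-ins for real embedding output):

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# With real vectors, e.g.:
# score = cosine_similarity(query_embedding, passage_embedding)
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # identical direction -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # orthogonal -> 0.0
```

Ranking passages by this score against a `get_query_embedding` result is the core of embedding-based retrieval; in practice a vector store index does this for you.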