Ollama

embedchain/notebooks/ollama.ipynb


Cookbook for using Ollama with Embedchain

Step-1: Set up Ollama by following these instructions: https://github.com/jmorganca/ollama

Once setup is done:

  • `ollama pull llama2` (all supported models can be found here: https://ollama.ai/library)
  • `ollama run llama2` (test out the model once)
  • `ollama serve`
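Before wiring up embedchain, it can help to confirm the Ollama server is actually reachable. A minimal sketch against Ollama's REST API (assuming the default port 11434), which degrades gracefully when the server is not running:

```python
import json
import urllib.request
import urllib.error


def list_local_models(base_url="http://localhost:11434"):
    """Return the names of locally pulled models via Ollama's /api/tags
    endpoint, or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None


models = list_local_models()
if models is None:
    print("Ollama server not reachable -- is `ollama serve` running?")
else:
    print("Available models:", models)
```

If `llama2` does not appear in the list, re-run `ollama pull llama2` before continuing.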

Step-2: Create an embedchain app and define your config (all local inference)

```python
from embedchain import App

app = App.from_config(config={
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "llama2",
            "temperature": 0.5,
            "top_p": 1,
            "stream": True
        }
    },
    "embedder": {
        "provider": "huggingface",
        "config": {
            "model": "BAAI/bge-small-en-v1.5"
        }
    }
})
```
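The same configuration can also be kept in a YAML file and passed by path, which is convenient for versioning. A minimal sketch with the settings above (the `App.from_config(config_path=...)` call is commented out so the snippet runs without embedchain installed or Ollama running):

```python
# Same settings as the inline dict above, expressed as YAML.
config_yaml = """\
llm:
  provider: ollama
  config:
    model: llama2
    temperature: 0.5
    top_p: 1
    stream: true
embedder:
  provider: huggingface
  config:
    model: BAAI/bge-small-en-v1.5
"""

with open("ollama_config.yaml", "w") as f:
    f.write(config_yaml)

# With embedchain installed and `ollama serve` running:
# from embedchain import App
# app = App.from_config(config_path="ollama_config.yaml")
print("wrote ollama_config.yaml")
```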

Step-3: Add data sources to your app

```python
app.add("https://www.forbes.com/profile/elon-musk")
```
Step-4: Query your app

```python
answer = app.query("who is elon musk?")
```