# Ollama

You can use LLMs from Ollama to run Mem0 locally. These models support tool calling. Make sure the model you configure has been pulled into your local Ollama instance first (e.g., `ollama pull mixtral:8x7b`).

## Usage

<CodeGroup>

```python Python
import os
from mem0 import Memory

os.environ["OPENAI_API_KEY"] = "your-api-key"  # the default embedder is OpenAI, so a key is still needed

config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mixtral:8x7b",
            "temperature": 0.1,
            "max_tokens": 2000,
        }
    }
}

m = Memory.from_config(config)

messages = [
    {"role": "user", "content": "I'm planning to watch a movie tonight. Any recommendations?"},
    {"role": "assistant", "content": "How about thriller movies? They can be quite engaging."},
    {"role": "user", "content": "I'm not a big fan of thriller movies but I love sci-fi movies."},
    {"role": "assistant", "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future."}
]
m.add(messages, user_id="alice", metadata={"category": "movies"})
```

```typescript TypeScript
import { Memory } from 'mem0ai/oss';

const config = {
  llm: {
    provider: 'ollama',
    config: {
      model: 'llama3.1:8b', // or any other Ollama model
      url: 'http://localhost:11434', // Ollama server URL
      temperature: 0.1,
    },
  },
};

const memory = new Memory(config);
const messages = [
  { role: "user", content: "I'm planning to watch a movie tonight. Any recommendations?" },
  { role: "assistant", content: "How about thriller movies? They can be quite engaging." },
  { role: "user", content: "I'm not a big fan of thriller movies but I love sci-fi movies." },
  { role: "assistant", content: "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future." }
];
await memory.add(messages, { userId: "alice", metadata: { category: "movies" } });
```

</CodeGroup>
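Once memories have been added, you can query them back with `search`. A minimal sketch continuing the Python example above (the query string is illustrative):

```python
# Retrieve stored memories relevant to a natural-language query for this user.
results = m.search("What kind of movies does Alice like?", user_id="alice")
print(results)
```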

## Config

All available parameters for the `ollama` config are listed in the Master List of All Params in Config.
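If you want embeddings to run locally as well (so no OpenAI key is needed), the embedder can also be pointed at Ollama. Below is a minimal sketch, assuming you have pulled an embedding model such as `nomic-embed-text`; the `ollama_base_url` key and the embedder settings here are illustrative, so verify the exact parameter names against the master list:

```python
from mem0 import Memory

# Fully local configuration: both the LLM and the embedder go through Ollama.
# NOTE: `ollama_base_url` and the embedder block are assumptions; check the
# master list of config params for the exact keys supported by your version.
config = {
    "llm": {
        "provider": "ollama",
        "config": {
            "model": "mixtral:8x7b",
            "temperature": 0.1,
            "max_tokens": 2000,
            "ollama_base_url": "http://localhost:11434",
        },
    },
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
        },
    },
}

m = Memory.from_config(config)
```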