docs/examples/agent/memory/vector_memory.ipynb
<a href="https://colab.research.google.com/github/run-llama/llama_index/blob/main/docs/examples/agent/memory/vector_memory.ipynb" target="_parent">Open In Colab</a>
NOTE: This memory example is deprecated in favor of the newer and more flexible Memory class. See the latest docs.
The vector memory module uses vector search (backed by a vector db) to retrieve relevant conversation items given a user input.
This notebook shows you how to use the VectorMemory class and walks through its individual functions. A typical use case for vector memory is as long-term storage of chat messages.
Here we initialize a raw VectorMemory module and demonstrate its core functions: putting ChatMessage objects into memory and retrieving them.
retriever_kwargs are the same arguments you'd specify on VectorIndexRetriever or via index.as_retriever(..).
from llama_index.core.memory import VectorMemory
from llama_index.embeddings.openai import OpenAIEmbedding
vector_memory = VectorMemory.from_defaults(
    vector_store=None,  # leave as None to use the default in-memory vector store
    embed_model=OpenAIEmbedding(),
    retriever_kwargs={"similarity_top_k": 1},
)
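The default in-memory vector store is ephemeral. If you want the memory to persist across sessions, you can pass any LlamaIndex-compatible vector store instead. Below is a minimal sketch assuming the llama-index-vector-stores-chroma and chromadb packages are installed; the local path and collection name are purely illustrative.
import chromadb
from llama_index.vector_stores.chroma import ChromaVectorStore

# hypothetical local path and collection name, for illustration only
chroma_client = chromadb.PersistentClient(path="./chat_memory_db")
chroma_collection = chroma_client.get_or_create_collection("chat_memory")

persistent_memory = VectorMemory.from_defaults(
    vector_store=ChromaVectorStore(chroma_collection=chroma_collection),
    embed_model=OpenAIEmbedding(),
    retriever_kwargs={"similarity_top_k": 1},
)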
from llama_index.core.llms import ChatMessage
msgs = [
    ChatMessage.from_str("Jerry likes juice.", "user"),
    ChatMessage.from_str("Bob likes burgers.", "user"),
    ChatMessage.from_str("Alice likes apples.", "user"),
]
# load into memory
for m in msgs:
    vector_memory.put(m)
# retrieve from memory
msgs = vector_memory.get("What does Jerry like?")
msgs
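get() returns a list of ChatMessage objects. As a quick sanity check, you can print the role and content of each retrieved message:
for m in msgs:
    # each retrieved item is a ChatMessage with a role and content
    print(f"{m.role}: {m.content}")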
vector_memory.reset()
Now let's reset the memory and try again. This time, we'll add an assistant message. Note that user/assistant messages are bundled by default.
msgs = [
    ChatMessage.from_str("Jerry likes burgers.", "user"),
    ChatMessage.from_str("Bob likes apples.", "user"),
    ChatMessage.from_str("Indeed, Bob likes apples.", "assistant"),
    ChatMessage.from_str("Alice likes juice.", "user"),
]
vector_memory.set(msgs)
msgs = vector_memory.get("What does Bob like?")
msgs
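As noted above, vector memory is typically used as long-term storage alongside a primary chat buffer. Below is a minimal sketch of that pattern using SimpleComposableMemory (which, like VectorMemory, predates the newer Memory class); the exact composition shown here is an assumption for illustration, not part of this notebook's original flow.
from llama_index.core.memory import ChatMemoryBuffer, SimpleComposableMemory

# VectorMemory acts as a secondary (long-term) source behind a standard chat buffer
composable_memory = SimpleComposableMemory.from_defaults(
    primary_memory=ChatMemoryBuffer.from_defaults(),
    secondary_memory_sources=[vector_memory],
)

composable_memory.put(ChatMessage.from_str("Jerry likes juice.", "user"))
composable_memory.get("What does Jerry like?")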