Neptune as Graph Memory

examples/graph-db-demo/neptune-example.ipynb


In this notebook, we will connect to an Amazon Neptune Analytics instance as the graph memory store for Mem0.

The Graph Memory store persists memories as a graph of entities and relationships during m.add operations. During an m.search operation, it uses vector distance algorithms to find related memories; the matching relationships are returned in the result and add context to the memories.

Reference: Vector Similarity using Neptune Analytics
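For intuition, vector similarity search ranks stored memory embeddings by how close they are to the query embedding. Below is a minimal, self-contained sketch of cosine similarity; the toy vectors and the helper function are illustrative only, not part of the Mem0 or Neptune APIs:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (real Titan v2 vectors are 1024-dimensional)
query = [0.1, 0.9, 0.2]
memory_a = [0.1, 0.8, 0.3]  # semantically close to the query
memory_b = [0.9, 0.1, 0.1]  # semantically distant

print(cosine_similarity(query, memory_a) > cosine_similarity(query, memory_b))  # True
```

Neptune Analytics performs this ranking natively over its vector index, so the search returns the closest stored memories along with their graph relationships.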

Prerequisites

1. Install Mem0 with Graph Memory support

To use Mem0 with Graph Memory support (as well as other Amazon services), use pip install:

bash
pip install "mem0ai[graph,extras]"

This command installs Mem0 along with the necessary dependencies for graph functionality (graph) and other Amazon dependencies (extras).

2. Connect to Amazon services

For this sample notebook, configure mem0ai with Amazon Neptune Analytics as the vector and graph store, and Amazon Bedrock for generating embeddings.

Use the following guide for setup details: Setup AWS Bedrock, AOSS, and Neptune

The Neptune Analytics instance must be created using the same vector dimensions as the embedding model creates. See: https://docs.aws.amazon.com/neptune-analytics/latest/userguide/vector-index.html
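As an illustrative sketch of matching the vector index to the embedder, the boto3 `neptune-graph` client accepts a `vectorSearchConfiguration` when creating a graph. The graph name and provisioned memory below are assumptions, and the actual call requires AWS credentials, so it is left commented out:

```python
EMBEDDING_DIMS = 1024  # amazon.titan-embed-text-v2:0 outputs 1024-dimensional vectors by default

# Request parameters for a new Neptune Analytics graph whose vector index
# dimension matches the embedding model. Name and memory size are illustrative.
create_params = {
    "graphName": "mem0-graph-memory",
    "provisionedMemory": 16,
    "vectorSearchConfiguration": {"dimension": EMBEDDING_DIMS},
}

# To actually create the graph (requires AWS credentials and boto3):
# import boto3
# boto3.client("neptune-graph").create_graph(**create_params)
print(create_params["vectorSearchConfiguration"]["dimension"])  # 1024
```

If the index dimension and the embedder's `embedding_dims` disagree, vector inserts will fail, so keep the two values defined from a single constant where possible.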

Your configuration should look similar to:

python
config = {
    "embedder": {
        "provider": "aws_bedrock",
        "config": {
            "model": "amazon.titan-embed-text-v2:0",
            "embedding_dims": 1024
        }
    },
    "llm": {
        "provider": "aws_bedrock",
        "config": {
            "model": "us.anthropic.claude-3-7-sonnet-20250219-v1:0",
            "temperature": 0.1,
            "max_tokens": 2000
        }
    },
    "vector_store": {
        "provider": "neptune",
        "config": {
            "endpoint": "neptune-graph://my-graph-identifier",
        },
    },
    "graph_store": {
        "provider": "neptune",
        "config": {
            "endpoint": "neptune-graph://my-graph-identifier",
        },
    },
}

Setup

Import all packages and setup logging

python
from mem0 import Memory
import os
import logging
import sys
from dotenv import load_dotenv

load_dotenv()

logging.getLogger("mem0.graphs.neptune.main").setLevel(logging.INFO)
logging.getLogger("mem0.graphs.neptune.base").setLevel(logging.INFO)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

logging.basicConfig(
    format="%(levelname)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    stream=sys.stdout,  # Explicitly set output to stdout
)

Setup the Mem0 configuration using:

  • Amazon Bedrock as the embedder
  • Amazon Neptune Analytics instance as a vector / graph store
python
bedrock_embedder_model = "amazon.titan-embed-text-v2:0"
bedrock_llm_model = "us.anthropic.claude-3-7-sonnet-20250219-v1:0"
embedding_model_dims = 1024

graph_identifier = os.environ.get("GRAPH_ID")

config = {
    "embedder": {
        "provider": "aws_bedrock",
        "config": {
            "model": bedrock_embedder_model,
            "embedding_dims": embedding_model_dims
        }
    },
    "llm": {
        "provider": "aws_bedrock",
        "config": {
            "model": bedrock_llm_model,
            "temperature": 0.1,
            "max_tokens": 2000
        }
    },
    "vector_store": {
        "provider": "neptune",
        "config": {
            "endpoint": f"neptune-graph://{graph_identifier}",
        },
    },
    "graph_store": {
        "provider": "neptune",
        "config": {
            "endpoint": f"neptune-graph://{graph_identifier}",
        },
    },
}

Graph Memory initialization

Initialize Neptune Analytics as the Graph Memory store:

python
m = Memory.from_config(config_dict=config)

app_id = "movies"
user_id = "alice"

m.delete_all(user_id=user_id)

Store memories

Create memories and store one at a time:

python
messages = [
    {
        "role": "user",
        "content": "I'm planning to watch a movie tonight. Any recommendations?",
    },
]

# Store inferred memories (default behavior)
result = m.add(messages, user_id=user_id, metadata={"category": "movie_recommendations"})

all_results = m.get_all(user_id=user_id)
for n in all_results["results"]:
    print(f"node \"{n['memory']}\": [hash: {n['hash']}]")

for e in all_results["relations"]:
    print(f"edge \"{e['source']}\" --{e['relationship']}--> \"{e['target']}\"")

Graph Explorer Visualization

You can visualize the graph using a Graph Explorer connection to Neptune Analytics in Neptune Notebooks in the Amazon console. See Using Amazon Neptune with graph notebooks for instructions on how to setup a Neptune Notebook with Graph Explorer.

Once the graph has been generated, open Neptune > Notebooks in the console and click Actions > Open Graph Explorer. This automatically connects to the Neptune Analytics graph that was provided in the notebook setup.

Once in Graph Explorer, visit Open Connections and send all the available nodes and edges to Explorer. Visit Open Graph Explorer to see the nodes and edges in the graph.

Graph Explorer Visualization Example

Note that the visualization given below represents only a single example of the possible results generated by the LLM.

Visualization for the relationship:

"alice" --plans_to_watch--> "movie"

python
messages = [
    {
        "role": "assistant",
        "content": "How about thriller movies? They can be quite engaging.",
    },
]

# Store inferred memories (default behavior)
result = m.add(messages, user_id=user_id, metadata={"category": "movie_recommendations"})

all_results = m.get_all(user_id=user_id)
for n in all_results["results"]:
    print(f"node \"{n['memory']}\": [hash: {n['hash']}]")

for e in all_results["relations"]:
    print(f"edge \"{e['source']}\" --{e['relationship']}--> \"{e['target']}\"")

Graph Explorer Visualization Example

Note that the visualization given below represents only a single example of the possible results generated by the LLM.

Visualization for the relationship:

"alice" --plans_to_watch--> "movie"
"thriller" --type_of--> "movie"
"movie" --can_be--> "engaging"

python
messages = [
    {
        "role": "user",
        "content": "I'm not a big fan of thriller movies but I love sci-fi movies.",
    },
]

# Store inferred memories (default behavior)
result = m.add(messages, user_id=user_id, metadata={"category": "movie_recommendations"})

all_results = m.get_all(user_id=user_id)
for n in all_results["results"]:
    print(f"node \"{n['memory']}\": [hash: {n['hash']}]")

for e in all_results["relations"]:
    print(f"edge \"{e['source']}\" --{e['relationship']}--> \"{e['target']}\"")

Graph Explorer Visualization Example

Note that the visualization given below represents only a single example of the possible results generated by the LLM.

Visualization for the relationship:

"alice" --dislikes--> "thriller_movies"
"alice" --loves--> "sci-fi_movies"
"alice" --plans_to_watch--> "movie"
"thriller" --type_of--> "movie"
"movie" --can_be--> "engaging"

python
messages = [
    {
        "role": "assistant",
        "content": "Got it! I'll avoid thriller recommendations and suggest sci-fi movies in the future.",
    },
]

# Store inferred memories (default behavior)
result = m.add(messages, user_id=user_id, metadata={"category": "movie_recommendations"})

all_results = m.get_all(user_id=user_id)
for n in all_results["results"]:
    print(f"node \"{n['memory']}\": [hash: {n['hash']}]")

for e in all_results["relations"]:
    print(f"edge \"{e['source']}\" --{e['relationship']}--> \"{e['target']}\"")

Graph Explorer Visualization Example

Note that the visualization given below represents only a single example of the possible results generated by the LLM.

Visualization for the relationship:

"alice" --recommends--> "sci-fi"
"alice" --dislikes--> "thriller_movies"
"alice" --loves--> "sci-fi_movies"
"alice" --plans_to_watch--> "movie"
"alice" --avoids--> "thriller"
"thriller" --type_of--> "movie"
"movie" --can_be--> "engaging"
"sci-fi" --type_of--> "movie"

Search memories

Search all memories for "what does alice love?". Since "alice" is the user, this searches for relationships matching her love of "sci-fi" movies and dislike of "thriller" movies.

python
search_results = m.search("what does alice love?", user_id=user_id)
for result in search_results["results"]:
    print(f"\"{result['memory']}\" [score: {result['score']}]")
for relation in search_results["relations"]:
    print(f"{relation}")
Finally, clean up by deleting the stored memories and resetting the store:

python
m.delete_all(user_id=user_id)
m.reset()

Conclusion

In this example, we demonstrated how an AWS tech stack can store and retrieve memory context: Amazon Bedrock LLMs interpret the given conversations, and Amazon Neptune Analytics stores the resulting text chunks in a graph format with relationship entities.