docs/user_guide/en/modules/memory.md
This document explains DevAll's memory system: the memory list configuration, the built-in store implementations, how agent nodes attach memories, and troubleshooting tips. The core code lives in `entity/configs/memory.py` and `node/agent/memory/*.py`.
- Stores are declared under `memory[]` in YAML with `name`, `type`, and `config`. Types are registered via `register_memory_store()` and point to concrete implementations.
- Agent nodes attach stores through `AgentConfig.memories`. Each `MemoryAttachmentConfig` defines the read/write strategy and retrieval stages.
- Every store implements `load()`, `retrieve()`, `update()`, and `save()`.
- `SimpleMemoryConfig` and `FileMemoryConfig` embed an `EmbeddingConfig`; `EmbeddingFactory` instantiates OpenAI or local vector models.

Example configuration:

```yaml
memory:
  - name: convo_cache
    type: simple
    config:
      memory_path: WareHouse/shared/simple.json
      embedding:
        provider: openai
        model: text-embedding-3-small
        api_key: ${API_KEY}
  - name: project_docs
    type: file
    config:
      index_path: WareHouse/index/project_docs.json
      file_sources:
        - path: docs/
          file_types: [".md", ".mdx"]
          recursive: true
      embedding:
        provider: openai
        model: text-embedding-3-small
```
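The `${API_KEY}` placeholder is presumably resolved from the process environment. As an assumption (not confirmed for DevAll's loader), the standard-library `os.path.expandvars` implements exactly this substitution:

```python
import os

# Assumed environment variable; DevAll's actual expansion mechanism may differ.
os.environ["API_KEY"] = "sk-demo"
print(os.path.expandvars("${API_KEY}"))  # → sk-demo
```

Keep real keys out of the YAML file itself and inject them via the environment.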
| Type | Path | Highlights | Best for |
|---|---|---|---|
| `simple` | `node/agent/memory/simple_memory.py` | Optional disk persistence (JSON) after runs; FAISS + semantic rerank; read/write capable. | Small conversation history, prototypes. |
| `file` | `node/agent/memory/file_memory.py` | Chunks files/dirs into a vector index; read-only; auto-rebuilds when files change. | Knowledge bases, doc QA. |
| `blackboard` | `node/agent/memory/blackboard_memory.py` | Lightweight append-only log trimmed by time/count; no vector search. | Broadcast boards, pipeline debugging. |
All stores register through `register_memory_store()`, so their summaries show up in the UI via `MemoryStoreConfig.field_specs()`.
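The `simple` store's two-stage retrieval described in the table above (vector search followed by a semantic rerank) can be sketched in plain Python. This is an illustrative stand-in, not DevAll's implementation: a NumPy inner product replaces FAISS `IndexFlatIP`, and a token-set Jaccard replaces the Jaccard/LCS rerank.

```python
import numpy as np

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity, used here as a cheap lexical rerank."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def retrieve(query_vec, query_text, item_vecs, item_texts, top_k=3):
    # Stage 1: inner-product search over normalized embeddings
    # (what FAISS IndexFlatIP computes on unit vectors).
    scores = item_vecs @ query_vec
    candidates = np.argsort(scores)[::-1][: top_k * 2]  # over-fetch, then rerank
    # Stage 2: rerank the candidates by lexical overlap with the query text.
    reranked = sorted(candidates,
                      key=lambda i: jaccard(query_text, item_texts[i]),
                      reverse=True)
    return [item_texts[i] for i in reranked[:top_k]]
```

A `similarity_threshold` filter would simply drop candidates whose stage-1 score falls below the cutoff before the rerank.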
`MemoryAttachmentConfig` fields:

| Field | Description |
|---|---|
| `name` | Target memory store name (must be unique inside `stores[]`). |
| `retrieve_stage` | Optional list limiting retrieval to certain `AgentExecFlowStage` values (`pre`, `plan`, `gen`, `critique`, etc.). Empty means all stages. |
| `top_k` | Number of items per retrieval (default 3). |
| `similarity_threshold` | Minimum similarity cutoff (`-1` disables filtering). |
| `read` / `write` | Whether this node can read from / write back to the store. |
Agent node example:

```yaml
nodes:
  - id: answer
    type: agent
    config:
      provider: openai
      model: gpt-4o-mini
      prompt_template: answer_user
      memories:
        - name: convo_cache
          retrieve_stage: ["gen"]
          top_k: 5
          read: true
          write: true
        - name: project_docs
          read: true
          write: false
```
Execution order:

1. At each configured retrieval stage (e.g. `gen`), `MemoryManager` iterates the node's attachments.
2. Attachments with `read=true` call `retrieve()` on their store.
3. Attachments with `write=true` call `update()` and optionally `save()`.

All memory stores persist a unified `MemoryItem` structure containing:

- `content_summary` – trimmed text used for embedding search.
- `input_snapshot` / `output_snapshot` – serialized message blocks (with base64 attachments) preserving multimodal context.
- `metadata` – store-specific telemetry (role, previews, attachment IDs, etc.).
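As a rough sketch, the unified record might look like the dataclass below. The field names come from this document; the exact types and any extra fields in DevAll's real `MemoryItem` may differ.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class MemoryItem:
    """Illustrative shape of the unified memory record (field names from the
    docs above; DevAll's actual definition may differ)."""
    content_summary: str                   # trimmed text used for embedding search
    input_snapshot: list[dict[str, Any]]   # serialized message blocks, may embed base64 attachments
    output_snapshot: list[dict[str, Any]]
    metadata: dict[str, Any] = field(default_factory=dict)  # role, previews, attachment IDs, ...
```

Because snapshots keep the full serialized blocks while `content_summary` holds only the trimmed text, stores can search cheaply over summaries yet replay the full multimodal context when an item is retrieved.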
This schema lets multimodal outputs flow into Memory/Thinking modules without extra plumbing.

### SimpleMemory

- Persistence: set `SimpleMemoryConfig.memory_path` (or `auto`). Defaults to in-memory only.
- Retrieval: vectors are searched with FAISS `IndexFlatIP`, then a semantic rerank (Jaccard/LCS) is applied.
- `update()` builds a `MemoryContentSnapshot` (text + blocks) for both input and output, deduplicates via a hashed summary, embeds the summary, and stores the snapshots/attachment metadata.
- Tune `max_content_length`, `top_k`, and `similarity_threshold` to avoid irrelevant context.

### FileMemory

- Each `file_sources` entry defines paths, suffix filters, recursion, and encoding. `index_path` is mandatory for incremental updates.
- Chunks carry `file_metadata`; `update()` is unsupported (the store is read-only).
- `load()` checks file hashes and rebuilds the index if needed. Keep `index_path` on persistent storage.

### BlackboardMemory

- Configure `memory_path` (or `auto`) plus `max_items`. The file is created in the session directory if missing.
- `retrieve()` returns the latest `top_k` entries ordered by time.
- `update()` appends the latest snapshot (input/output blocks, attachments, previews). No embeddings are generated, so retrieval is purely recency-based.

### Embedding configuration

- `EmbeddingConfig` fields: `provider`, `model`, `api_key`, `base_url`, `params`.
- `provider=openai` uses the official client; override `base_url` for compatibility layers.
- `params` can include `use_chunking`, `chunk_strategy`, `max_length`, etc.
- `provider=local` expects `params.model_path` and depends on `sentence-transformers`.

### Troubleshooting

- Attachment names must match `memory[]` names; duplicates raise `ConfigError`.
- SimpleMemory without embeddings downgrades to append-only behavior; FileMemory errors out. Provide an embedding config whenever semantic search is required.
- Make sure `memory_path`/`index_path` are writable. Mount volumes when running inside containers.
- Build FileMemory indexes offline, use `retrieve_stage` to limit retrieval frequency, and tune `top_k`/`similarity_threshold` to balance recall vs. token cost.

### Extending with a custom store

- Implement a new store (subclass `MemoryBase`).
- Register it with `register_memory_store("my_store", config_cls=..., factory=..., summary="...")` in `node/agent/memory/registry.py`.
- Add the config's `FIELD_SPECS`, then run `python -m tools.export_design_template ...` so the frontend picks up the enum.