
Memory Vault

The memory vault — where knowledge goes to be preserved, indexed, and occasionally forgotten on purpose

The Memory Vault is Sanctum’s shared knowledge base, inspired by how human memory works — which is to say, it forgets things on purpose and considers this a feature. It spans working, short-term, and long-term memory tiers, with a daily consolidation process that acts like sleep: distilling raw observations into lasting knowledge.

At some point during development, we realized we’d built a system that remembers where you left your SSH keys, forgets trivial session data, and dreams at 4 AM. The horror movie writes itself.

Sanctum runs a four-layer memory stack. Each layer does one thing well. Together they give the house a memory system more sophisticated than most startups’ data architecture.

| Layer | Role | Backend | Cost |
|---|---|---|---|
| Mem0 | Primary cross-agent semantic memory | mem0.ai via Rube/Composio | Free |
| Memory Vault | Git-native versioned source of truth | sanctum-memory repo | Free |
| Neo4j / Graphiti | Entity relationships & graph queries | Local, port 31416 | Free (self-hosted) |
| Supermemory | Legacy cross-LLM bridge (secondary) | supermemory.ai MCP | Connected but secondary |

When Mem0 and the vault disagree, the vault wins — because Git has receipts. When the vault and the graph disagree, someone has a long night ahead of them. This has happened twice.

```
┌─────────────────────────────────────────────────────┐
│ LAYER 1: WORKING MEMORY (per-session, ephemeral)    │
│ Agent's context window. Dies when session ends.     │
├─────────────────────────────────────────────────────┤
│ LAYER 2: MEM0 (cross-agent, persistent, free)       │
│ Semantic search + structured queries via Rube.      │
│ project: sanctum-memory / user: Ogilthorp3          │
├─────────────────────────────────────────────────────┤
│ LAYER 3: SHORT-TERM MEMORY (inbox/, days-weeks)     │
│ inbox/{agent}/ — raw observations, session notes    │
│ TTL: 7-30 days. Consolidated or discarded daily.    │
├─────────────────────────────────────────────────────┤
│ LAYER 4: LONG-TERM MEMORY (consolidated, permanent) │
│ knowledge/{topic}/ — semantic facts (timeless)      │
│ events/YYYY/MM/ — significant episodes              │
│ procedures/ — how-to, runbooks                      │
│ Neo4j/Graphiti — entity relationship graph          │
└─────────────────────────────────────────────────────┘
```
```
~/.sanctum/memory/
├── inbox/          # Short-term, per-agent
│   ├── claude-code/
│   ├── gemini-cli/
│   ├── openclaw/
│   └── home-assistant/
├── knowledge/      # Long-term semantic
│   ├── devices/
│   ├── network/
│   ├── systems/
│   ├── people/
│   └── preferences/
├── events/         # Long-term episodic
│   └── 2026/03/
├── procedures/     # Long-term procedural
│   ├── troubleshooting/
│   ├── maintenance/
│   └── automation/
├── archive/        # Expired, retained 90 days
└── meta/           # Schema, consolidation logs
```

Every note has standardized YAML frontmatter. Opinions may vary on whether your house needs a metadata schema for its thoughts. Opinions are wrong.

```markdown
---
type: semantic | episodic | procedural | observation | session_summary
source_agent: claude-code | gemini-cli | openclaw | user | system
created: 2026-03-20T17:00:00Z
updated: 2026-03-20T17:00:00Z
last_accessed: 2026-03-20T17:00:00Z
access_count: 3
importance: 0.85
consolidation_status: raw | reviewed | consolidated | archived
ttl_days: null        # null = permanent, or integer days
superseded_by: null   # path to newer version
tags: [network, infrastructure]
related_entities: [mac-mini, ubuntu-vm]
---
# Note Title
Content with [[wikilinks]] to related notes...
```
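A minimal sketch of how an agent might validate this schema before writing a note. The required-key set and the flat `key: value` parser here are assumptions for illustration, not the vault's actual implementation (which handles full YAML):

```python
REQUIRED_KEYS = {"type", "source_agent", "created", "importance",
                 "consolidation_status"}  # assumed minimal required set


def parse_note(text):
    """Split a vault note into (frontmatter dict, body).

    Minimal parser: flat `key: value` lines only, no nested YAML.
    """
    if not text.startswith("---"):
        raise ValueError("note has no frontmatter block")
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, sep, value = line.partition(":")
        if sep:
            meta[key.strip()] = value.strip()
    missing = REQUIRED_KEYS - meta.keys()
    if missing:
        raise ValueError(f"missing required keys: {sorted(missing)}")
    return meta, body.lstrip()
```

Anything failing validation stays in the agent's inbox rather than polluting long-term storage.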

Every note has a computed importance score (0.0 to 1.0) that determines its TTL. The vault is, in essence, deciding what’s worth remembering. A power you’d think we’d reserve for sentient beings, but here we are.

| Importance | TTL | Notes |
|---|---|---|
| > 0.8 | Permanent | Core knowledge, user-stated facts |
| 0.5 - 0.8 | 90 days | Agent-observed patterns |
| 0.3 - 0.5 | 30 days | Single observations |
| < 0.3 | 7 days | Ephemeral session data |

Score formula: `source_weight × recency × access_frequency × link_density`

Notes accessed 5+ times are protected from expiry regardless of score. If the system keeps coming back to a memory, there’s probably a reason. Even if that reason is paranoia.
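The scoring and TTL rules above can be sketched as follows. Treating the boundaries as inclusive at the lower edge and assuming each factor is normalized to [0, 1] are both assumptions; the table leaves the edge cases unstated:

```python
def importance(source_weight, recency, access_frequency, link_density):
    """Compute the importance score; each factor assumed in [0, 1]."""
    return source_weight * recency * access_frequency * link_density


def ttl_days(score, access_count):
    """Map an importance score to a TTL in days. None means permanent."""
    if access_count >= 5:   # 5+ accesses: protected from expiry
        return None
    if score > 0.8:
        return None         # core knowledge, user-stated facts
    if score >= 0.5:
        return 90           # agent-observed patterns
    if score >= 0.3:
        return 30           # single observations
    return 7                # ephemeral session data
```

The access-count check runs first, which is exactly the protection rule: frequently revisited notes never expire, whatever their score says.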

The consolidation engine runs daily at 4:17 AM (LaunchAgent: com.sanctum.memory-consolidate). Like sleep for the brain, except it runs on schedule, never hits snooze, and files a report when it’s done.

  1. Scan inbox for notes older than 24 hours
  2. Deduplicate against existing long-term knowledge
  3. Classify and move to the appropriate long-term directory
  4. Recompute importance scores for all active notes
  5. Expire notes that have exceeded their TTL
  6. Push to Mem0 — consolidated knowledge synced to Mem0 for cross-agent access
  7. Clean archive — delete archived notes older than 90 days
  8. Generate report in meta/consolidation-log.md
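Steps 1 through 5 can be sketched as a dry run over in-memory note dicts. Deduplicating by title is a stand-in for the real semantic dedup, and the actual engine moves files and syncs to Mem0, both elided here:

```python
from datetime import datetime, timedelta


def consolidate_dry_run(inbox, now):
    """Classify inbox notes: keep (too fresh), dedupe (duplicate),
    expire (past TTL), or promote to long-term storage."""
    report = {"kept": [], "deduped": [], "expired": [], "promoted": []}
    seen = set()
    for note in inbox:
        age = now - note["created"]
        if age < timedelta(hours=24):        # step 1: only notes >24h old
            report["kept"].append(note["title"])
            continue
        if note["title"] in seen:            # step 2: naive title dedup
            report["deduped"].append(note["title"])
            continue
        seen.add(note["title"])
        ttl = note.get("ttl_days")
        if ttl is not None and age > timedelta(days=ttl):
            report["expired"].append(note["title"])   # step 5: past TTL
        else:
            report["promoted"].append(note["title"])  # step 3: move to long-term
    return report
```

The report dict mirrors what ends up in meta/consolidation-log.md.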

Any agent can trigger consolidation manually via the memory_consolidate MCP tool.

To prevent bloat, each layer has a maximum note count. Without limits, a system that remembers everything eventually remembers nothing useful — a problem familiar to anyone who’s ever hoarded browser tabs.

| Layer | Max Notes | Action at Cap |
|---|---|---|
| Inbox | 300 | Emergency consolidation |
| Knowledge | 1000 | Merge related notes |
| Events | 500 | Summarize old episodes |
| Procedures | 200 | Merge similar runbooks |
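Enforcement reduces to a lookup. The action identifiers below are illustrative labels for the behaviors in the table, not names from the actual codebase:

```python
# (cap, action-at-cap) per layer, from the table above
CAPS = {
    "inbox":      (300,  "emergency_consolidation"),
    "knowledge":  (1000, "merge_related_notes"),
    "events":     (500,  "summarize_old_episodes"),
    "procedures": (200,  "merge_similar_runbooks"),
}


def cap_action(layer, note_count):
    """Return the action to take when a layer exceeds its cap, else None."""
    cap, action = CAPS[layer]
    return action if note_count > cap else None
```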
The vault exposes its operations as MCP tools:

| Tool | Description |
|---|---|
| `memory_search` | Full-text search with tag and folder filters |
| `memory_read` | Read a note (auto-tracks access) |
| `memory_write` | Create/update with schema enforcement |
| `memory_delete` | Remove a note |
| `memory_list` | List notes by folder, tag, or type |
| `memory_links` | Traverse the wikilink graph |
| `memory_consolidate` | Trigger consolidation (dry-run by default) |
| `memory_health` | Vault health metrics dashboard |
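The auto-tracking that `memory_read` performs could look roughly like this. The regex-on-raw-frontmatter approach is a sketch for illustration, not the tool's actual implementation:

```python
import re


def track_access(note_text, now_iso):
    """Bump access_count and stamp last_accessed in a note's raw text."""
    count = int(re.search(r"access_count: (\d+)", note_text).group(1))
    note_text = re.sub(r"access_count: \d+",
                       f"access_count: {count + 1}", note_text)
    return re.sub(r"last_accessed: \S+",
                  f"last_accessed: {now_iso}", note_text)
```

This is what feeds the 5+ accesses expiry protection: every read nudges the note closer to permanence.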

Mem0 is the primary cross-agent memory layer, accessed via Rube/Composio. It’s free, and every recipe and automation that needs Sanctum context queries it first.

| Setting | Value |
|---|---|
| Provider | mem0.ai |
| Access | Rube/Composio (`MEM0_*` tools) |
| Project | sanctum-memory |
| User ID | Ogilthorp3 |
| Cost | Free |
The key `MEM0_*` tools available through Composio:

| Tool | Description |
|---|---|
| `MEM0_ADD_NEW_MEMORY_RECORDS` | Store new memories with user/project scope |
| `MEM0_PERFORM_SEMANTIC_SEARCH_ON_MEMORIES` | Natural language search across all memories |
| `MEM0_RETRIEVE_MEMORY_LIST` | List memories with pagination and filters |
| `MEM0_SEARCH_MEMORIES_WITH_QUERY_FILTERS` | Structured search with metadata filters |
| `MEM0_UPDATE_MEMORY_DETAILS_BY_ID` | Update existing memory content |
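A recipe calling `MEM0_ADD_NEW_MEMORY_RECORDS` would assemble an argument dict scoped to the Sanctum project and user. The field names below are assumptions based on the scoping described above, not Composio's documented schema:

```python
def mem0_add_payload(content, tags=None):
    """Assemble a memory record scoped to the Sanctum project/user.

    Field names here are assumed, not taken from Composio's schema.
    """
    return {
        "messages": [{"role": "user", "content": content}],
        "user_id": "Ogilthorp3",        # from the settings table above
        "project": "sanctum-memory",
        "metadata": {"tags": tags or []},
    }
```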
How each client reaches the two memory layers:

| Client | Vault Access | Mem0 Access |
|---|---|---|
| Claude Code | MCP stdio (port 42069) | Via Rube/Composio |
| Gemini CLI | MCP stdio | Via Rube/Composio |
| OpenClaw agents | SSH skill (vault.sh) | Via API |
| Rube recipes | GitHub tools | Native (Composio `MEM0_*`) |
| Obsidian | File system (~/.sanctum/memory/) | N/A |
What gets stored follows a few hard rules:

- If it's derivable from code, logs, or sensors, don't store it
- If the same fact is stated multiple ways, dedupe into one note
- If it hasn't been accessed in 30 days and importance < 0.5, archive it
- Raw data goes to time-series DBs, not memory
- Every episodic memory needs context (who, what, when, why)
- Every semantic memory should be self-contained
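Of these rules, the archive condition is mechanical enough to sketch directly (the others need semantic judgment):

```python
def should_archive(days_since_access, importance):
    """Archive rule above: untouched for 30 days AND importance < 0.5."""
    return days_since_access > 30 and importance < 0.5
```

Both conditions must hold: a high-importance note survives neglect, and a fresh note survives a low score.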