Agents today store facts. Mnemebrain stores beliefs — with evidence, confidence, provenance, and revision.
So your agents can explain decisions, detect contradictions, and learn from outcomes. Without retraining.
Facts are stored without evidence or provenance. Ask your agent why it thinks something — it can't tell you. No justification graph. No audit trail.
When new information conflicts with old, most systems just overwrite. No contradiction detection. No revision logic. Beliefs become internally inconsistent, invisibly.
Enterprise teams need auditable AI. But when memory is a vector store, there's nothing to audit. Compliance and explainability are impossible to retrofit.
Every session starts from zero. Reasoning is reconstructed from scratch. No feedback loop, no confidence adjustment, no improvement without full retraining.
```python
# 1. Store a belief with full provenance
b = believe(
    claim="User prefers Italian restaurants",
    evidence=["conv_12", "conv_17"],
    confidence=0.82,
)

# 2. Contradicting evidence arrives
believe(
    claim="User switched to low-carb diet",
    evidence=["conv_21"],
)

# 3. Trigger AGM-minimal revision
revise(b)
# → confidence propagates via Dempster-Shafer
# → belief scope narrows automatically
# → full provenance preserved

# 4. Audit the belief at any time
explain("User prefers Italian restaurants")
# → supported_by: conv_12, conv_17
# → weakened_by: conv_21
# → confidence: 0.82 → 0.61
# → revised_scope: "Italian (excl. low-carb)"

# 5. Close the loop with outcomes
feedback(episode_id="ep_44", outcome=ACCEPTED)
# → confidence updated: 0.61 → 0.68
# → reasoning episode stored as QueryNode
# → agent improves without retraining
```
Every claim is stored with evidence, confidence, and provenance. The agent always knows what supports a belief — not just that it exists.
When new evidence conflicts, revise() applies the minimal change required to restore consistency. Confidence propagates automatically to all downstream beliefs via Dempster-Shafer fusion.
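For intuition, the fusion step refers to Dempster's rule of combination from Dempster-Shafer theory. Here is a minimal self-contained sketch of that rule; the {supports, refutes} frame and the mass values are invented for the example and are not Mnemebrain's internal representation:

```python
# Dempster's rule of combination over a two-hypothesis frame.
# Mass functions map frozensets of hypotheses to belief mass.
from itertools import product

def combine(m1, m2):
    """Fuse two mass functions; renormalize away conflicting mass."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully contradict")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

S, R = frozenset({"supports"}), frozenset({"refutes"})
U = S | R  # the full frame: "unknown"

old = {S: 0.82, U: 0.18}   # prior evidence for a belief
new = {R: 0.40, U: 0.60}   # contradicting evidence arrives
fused = combine(old, new)  # support weakens, but is not erased
```

Note how the contradicting source lowers support without discarding it: the fused mass on "supports" drops below the prior 0.82, mirroring the confidence decay shown in the walkthrough.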
Any belief can be traced back through its justification graph. Every revision, every piece of supporting evidence, every confidence delta — on record permanently.
feedback() stores the reasoning episode as a QueryNode and propagates outcome signal back to adjust confidence. The agent measurably improves over time — no retraining.
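As a toy illustration of outcome-driven confidence adjustment, the update below nudges confidence toward the observed outcome; the learning rate and update rule are assumptions for the sketch, not Mnemebrain's actual method:

```python
# Toy sketch: move confidence toward 1.0 on success, 0.0 on failure.
def feedback_update(confidence, outcome, lr=0.2):
    """lr controls how strongly one outcome shifts the belief."""
    target = 1.0 if outcome == "ACCEPTED" else 0.0
    return confidence + lr * (target - confidence)

c = feedback_update(0.61, "ACCEPTED")  # confidence rises, bounded by 1.0
```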
Every time an agent answers "What does the user want?" or "What should I do next?", it reconstructs that reasoning from scratch. The tokens disappear. The logic is lost.
QueryNodes change this. Each reasoning episode — the question asked, the belief subgraph consulted, the answer produced, and the real-world outcome — is stored as a first-class node in the graph.
Over time, the agent accumulates reasoning competence. Not just memory. A substrate of how it thinks, reusable and improvable.
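The shape of such a reasoning episode can be sketched as a small record; these field names are illustrative guesses at what a QueryNode carries, not the actual schema:

```python
# Hypothetical QueryNode shape: question, beliefs used, answer, outcome.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class QueryNode:
    question: str                 # what the agent was asked
    beliefs_consulted: List[str]  # ids of the belief subgraph used
    answer: str                   # what the agent produced
    outcome: Optional[str] = None # real-world result, filled in later

ep = QueryNode(
    question="What cuisine should I book?",
    beliefs_consulted=["prefers_italian", "low_carb_diet"],
    answer="Italian place with low-carb options",
)
ep.outcome = "ACCEPTED"  # feedback() closes the loop later
```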
Each claim is a node in a directed justification DAG with evidence, confidence, and provenance. Not a plain fact — a structured belief with causal history.
Dempster-Shafer fusion propagates confidence across the entire belief graph. When evidence changes, every downstream belief updates. The agent always knows how sure it is — and why that changed.
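A minimal sketch of the belief-node shape described above, with justification edges linking beliefs into a DAG; the field names here are assumptions for illustration, not Mnemebrain's schema:

```python
# Illustrative belief node: claim + evidence + confidence + edges.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BeliefNode:
    claim: str
    evidence: List[str]         # provenance: source ids the claim rests on
    confidence: float           # current fused confidence in [0, 1]
    supports: List["BeliefNode"] = field(default_factory=list)  # DAG edges

root = BeliefNode("User prefers Italian restaurants",
                  evidence=["conv_12", "conv_17"], confidence=0.82)
derived = BeliefNode("Recommend Italian venues first",
                     evidence=[], confidence=0.82)
root.supports.append(derived)  # downstream belief justified by root
```

When the root's confidence changes, a propagation pass would walk these `supports` edges to update every downstream belief.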
Background episodic compression during idle cycles. Conversations become long-term beliefs without manual intervention. Inspired by hippocampal sharp-wave ripples (SWRs) during slow-wave sleep.
HippoRAG-style sparse pointers enable multi-hop retrieval across the belief graph. Traverses justification chains and associative paths — not just nearest-neighbor cosine similarity.
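The multi-hop traversal idea can be sketched as a bounded walk over justification edges. This is only the traversal concept, not the HippoRAG algorithm itself (which uses Personalized PageRank over an extracted knowledge graph); the adjacency-list graph is invented for the example:

```python
# Collect every belief reachable from the seeds within max_hops edges.
from collections import deque

def multi_hop(graph, seeds, max_hops=2):
    seen = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't expand beyond the hop budget
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

graph = {
    "prefers_italian": ["likes_pasta", "venue_roma"],
    "likes_pasta": ["high_carb"],
    "high_carb": ["conflicts_low_carb"],
}
hits = multi_hop(graph, ["prefers_italian"], max_hops=2)
# two-hop retrieval reaches "high_carb" but not "conflicts_low_carb"
```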
Recall opens a lability window. When retrieved context conflicts with stored belief, revise() triggers AGM-minimal belief revision. The memory updates — not just the output.
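In the crudest possible caricature of "minimal change" revision, accepting a new claim retracts only the beliefs that directly conflict with it and leaves everything else untouched. The real operation described above is subtler (it weakens confidence and narrows scope rather than deleting), and the store shape and `contradicts` predicate below are invented for the example:

```python
# Toy AGM-flavored revision: keep all non-conflicting beliefs,
# drop the conflicting ones, then add the new claim.
def revise(store, new_claim, new_conf, contradicts):
    """store: {claim: confidence}. Returns the revised store."""
    survivors = {c: conf for c, conf in store.items()
                 if not contradicts(c, new_claim)}
    survivors[new_claim] = new_conf
    return survivors

store = {"prefers_italian": 0.82, "likes_opera": 0.70}
contradicts = lambda a, b: {a, b} == {"prefers_italian", "low_carb_diet"}
revised = revise(store, "low_carb_diet", 0.9, contradicts)
# only the conflicting belief is retracted; "likes_opera" survives
```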
feedback() stores reasoning episodes as QueryNodes. Real-world outcomes propagate back to adjust confidence. GoalNodes persist objectives across sessions. The agent compounds competence.
"Agents need belief systems, not just memory systems. Belief implies justification. Justification implies revision. Revision implies learning."
Every decision your agent makes can be traced through the justification graph. Compliance, explainability, and accountability — built into memory, not bolted on top. The audit trail is the architecture.
When a user's preferences change, the agent doesn't just overwrite — it revises. Evidence from new sessions updates confidence in old beliefs, with full provenance preserved.
ATTACKS edges in the belief graph automatically flag when new evidence conflicts with established knowledge. Research agents that surface uncertainty, not bury it.
CRDT-based belief replication lets multiple agents share and diverge on knowledge without coordination bottlenecks. Memory as distributed, consistent infrastructure.
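The coordination-free merge property can be shown with one of the simplest CRDTs, a grow-only set. Real belief replication would need richer types (e.g. to support retraction and confidence), but the merge-by-join idea is the same; the replica contents are invented for the example:

```python
# G-Set CRDT: replicas merge by set union, which is commutative,
# associative, and idempotent, so replicas converge in any order.
def merge(a, b):
    return a | b

replica_a = {("prefers_italian", "conv_12")}
replica_b = {("low_carb_diet", "conv_21")}
merged = merge(replica_a, replica_b)  # order-free, coordination-free
```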
Mnemebrain is built on a complete, versioned engineering specification — covering the belief data model, all four core operations, the consolidation pipeline, retrieval layer, QueryNodes, GoalNodes, and integration API.
If you're evaluating whether this is the right infrastructure for your system, start here. No sales call required.
Early access open for teams building serious agent infrastructure.