Belief infrastructure for AI agents

Give your AI
a real brain.

Agents today store facts. Mnemebrain stores beliefs — with evidence, confidence, provenance, and revision.

So your agents can explain decisions, detect contradictions, and learn from outcomes. Without retraining.

Read the architecture →
6 core primitives
0 competitors with this
Belief revision depth
[Belief graph visualization: User Prefs (conf 0.87) · Context A (conf 0.73) · Goal State (conf 0.91) · Evidence (conf 0.6) · Revision (conf 0.95)]

Agents are broken
at the memory layer.

They forget why they believe things

Facts are stored without evidence or provenance. Ask your agent why it thinks something — it can't tell you. No justification graph. No audit trail.

They contradict themselves silently

When new information conflicts with old, most systems just overwrite. No contradiction detection. No revision logic. Beliefs become internally inconsistent, invisibly.

They can't explain decisions

Enterprise teams need auditable AI. But when memory is a vector store, there's nothing to audit. Compliance and explainability are impossible to retrofit.

They don't learn from outcomes

Every session starts from zero. Reasoning is reconstructed from scratch. No feedback loop, no confidence adjustment, no improvement without full retraining.

The same conversation.
A fundamentally different result.

Without Mnemebrain
Do I like Italian food?
Yes, you like Italian food.
I told you last week I stopped eating carbs.
I apologize for the confusion. I've updated that.
Why did you think that in the first place?
I don't have enough context to answer that.
With Mnemebrain
Do I like Italian food?
belief: "user likes Italian food"
confidence: 0.82
evidence: conv_12, conv_17
I stopped eating carbs last week.
// revise() triggered
belief: "Italian (excl. low-carb)"
confidence: 0.82 → 0.61
new_evidence: conv_21
Why did you think that originally?
// explain() output
supported_by: conv_12 // "love this pasta place"
supported_by: conv_17 // booked Italian 3×
Six primitives. The complete API for a thinking agent.
believe(claim, evidence, source)
retract(belief_id)
explain(belief_id)
revise(belief_id, new_context)
ask(query)
feedback(episode_id, outcome)

This is a reasoning system,
not a vector store.

# 1. Store a belief with full provenance
b = believe(
    claim="User prefers Italian restaurants",
    evidence=["conv_12", "conv_17"],
    confidence=0.82
)

# 2. Contradicting evidence arrives
believe(
    claim="User switched to low-carb diet",
    evidence=["conv_21"]
)

# 3. Trigger AGM-minimal revision
revise(b)
# → confidence propagates via Dempster-Shafer
# → belief scope narrows automatically
# → full provenance preserved

# 4. Audit the belief at any time
explain("User prefers Italian restaurants")
→ supported_by:   conv_12, conv_17
→ weakened_by:    conv_21
→ confidence:     0.82 → 0.61
→ revised_scope:  "Italian (excl. low-carb)"

# 5. Close the loop with outcomes
feedback(episode_id="ep_44", outcome=ACCEPTED)
→ confidence updated: 0.61 → 0.68
→ reasoning episode stored as QueryNode
→ agent improves without retraining
Step 01

Structured beliefs, not plain facts

Every claim is stored with evidence, confidence, and provenance. The agent always knows what supports a belief — not just that it exists.
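A belief of this shape can be sketched as a minimal data type. The field names below are illustrative, not Mnemebrain's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """A claim plus everything needed to justify or revise it."""
    claim: str                # the proposition itself
    evidence: list            # e.g. conversation ids the claim rests on
    confidence: float         # 0.0–1.0, updated as evidence changes
    source: str = "unknown"   # provenance: which agent/channel asserted it

b = Belief(
    claim="User prefers Italian restaurants",
    evidence=["conv_12", "conv_17"],
    confidence=0.82,
    source="conversation",
)
```

The point is that the claim never travels alone: evidence, confidence, and source are part of the record, not metadata bolted on later.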

Steps 02 – 03

AGM-minimal belief revision

When new evidence conflicts, revise() applies the minimal change required to restore consistency. Confidence propagates automatically to all downstream beliefs via Dempster-Shafer fusion.
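Dempster-Shafer fusion itself is a standard technique; here is a minimal sketch of Dempster's rule of combination over a binary frame, with `H` = "user likes Italian", `notH` its negation, and `UNK` the whole frame. The masses are illustrative, not the exact update in Mnemebrain:

```python
def ds_combine(m1, m2):
    """Dempster's rule of combination over the frame {H, notH, UNK}.

    m1, m2: dicts mapping 'H', 'notH', 'UNK' to mass (each sums to 1).
    Conflicting mass (H ∩ notH = ∅) is discarded and the rest renormalized.
    """
    combined = {}
    for a, ma in m1.items():
        for b, mb in m2.items():
            # Intersection of focal elements: UNK acts as the full frame.
            if a == b or b == "UNK":
                key = a
            elif a == "UNK":
                key = b
            else:
                key = None  # disjoint → conflict mass
            if key is not None:
                combined[key] = combined.get(key, 0.0) + ma * mb
    total = sum(combined.values())  # = 1 - conflict mass
    return {k: v / total for k, v in combined.items()}

# Prior belief vs. new contradicting evidence (illustrative masses):
prior = {"H": 0.82, "notH": 0.0, "UNK": 0.18}
new   = {"H": 0.0,  "notH": 0.40, "UNK": 0.60}
fused = ds_combine(prior, new)
```

After fusion, mass on `H` drops and some mass shifts to `notH` — the qualitative behavior behind a confidence drop like 0.82 → 0.61 when contradicting evidence lands.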

Step 04

Full audit trail via explain()

Any belief can be traced back through its justification graph. Every revision, every piece of supporting evidence, every confidence delta — on record permanently.

Step 05

Outcomes feed back to beliefs

feedback() stores the reasoning episode as a QueryNode and propagates outcome signal back to adjust confidence. The agent measurably improves over time — no retraining.
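Mnemebrain's exact update rule isn't specified here; one simple illustrative scheme moves confidence a fixed fraction of the remaining distance toward the observed outcome:

```python
ACCEPTED, REJECTED = "accepted", "rejected"

def apply_outcome(confidence, outcome, rate=0.25):
    """Nudge a belief's confidence toward an observed outcome.

    Illustrative rule: move `rate` of the remaining distance toward
    1.0 on ACCEPTED or 0.0 on REJECTED. Bounded, monotone, and
    requires no retraining — just an arithmetic update on the node.
    """
    target = 1.0 if outcome == ACCEPTED else 0.0
    return confidence + rate * (target - confidence)

c = apply_outcome(0.61, ACCEPTED)   # 0.61 + 0.25 * (1.0 - 0.61)
```

Repeated accepted outcomes compound confidence toward 1.0; rejections pull it back down, so the belief tracks its real-world hit rate.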

Most systems store what agents know.
We also store how they reasoned.

Every time an agent answers "What does the user want?" or "What should I do next?", it reconstructs that reasoning from scratch. The tokens disappear. The logic is lost.

QueryNodes change this. Each reasoning episode — the question asked, the belief subgraph consulted, the answer produced, and the real-world outcome — is stored as a first-class node in the graph.

Over time, the agent accumulates reasoning competence. Not just memory. A substrate of how it thinks, reusable and improvable.

Belief graph → what the agent knows
QueryNodes → how it reasoned
GoalNodes → what it's trying to achieve

Together: an agent that improves at reasoning, not just retrieval.
QueryNode: "Where should I eat tonight?"
type: RECOMMENDATION · stored: 2026-03-05
↓ QUERY_USES
Belief: user is vegetarian (conf: 0.91)
evidence: conv_08, conv_14
↓ QUERY_USES
Belief: user likes Italian with others (conf: 0.74)
evidence: conv_45, restaurant_search_3
↓ QUERY_PRODUCED
Answer: "Recommend Grano e Sale" (conf: 0.72)
↓ OUTCOME_FOR
OutcomeNode: ACCEPTED
confidence adjusted: 0.72 → 0.81
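An episode like the one above could be represented with a structure along these lines — an illustrative sketch, not the actual node schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class QueryNode:
    """One reasoning episode, stored as a first-class graph node."""
    question: str
    query_type: str                         # e.g. "RECOMMENDATION"
    uses: list = field(default_factory=list)  # belief ids consulted (QUERY_USES)
    answer: str = ""                          # what was produced (QUERY_PRODUCED)
    answer_confidence: float = 0.0
    outcome: Optional[str] = None             # filled in later via OUTCOME_FOR

ep = QueryNode(
    question="Where should I eat tonight?",
    query_type="RECOMMENDATION",
    uses=["belief_vegetarian", "belief_likes_italian"],
    answer="Recommend Grano e Sale",
    answer_confidence=0.72,
)
```

Because the episode is a node with edges to the beliefs it consulted, a later outcome can propagate credit or blame back along exactly those edges.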

Nothing else does all of this.

Capability                    | Mem0 / Zep  | LangGraph Memory | Mnemebrain
------------------------------|-------------|------------------|------------------------
Evidence tracking             | ✗           | ✗                | ✓ Full provenance
Confidence scores             | ✗           | ✗                | ✓ Dempster-Shafer
Belief revision               | ✗           | ✗                | ✓ AGM minimal change
Explanation / audit trail     | ✗           | ✗                | ✓ Graph traversal
Contradiction detection       | ✗           | ✗                | ✓ ATTACKS edges
Reasoning memory (QueryNodes) | ✗           | ✗                | ✓ Full episode storage
Multi-hop retrieval           | Cosine only | Cosine only      | ✓ HippoRAG PageRank
Learns from outcomes          | ✗           | ✗                | ✓ feedback() loop

Grounded in neuroscience,
built for production.

01

Belief Graph

→ Agent always knows WHY it believes something

Each claim is a node in a directed justification DAG with evidence, confidence, and provenance. Not a plain fact — a structured belief with causal history.

Biological: Associative cortex
02

Confidence Engine

→ Uncertainty propagates automatically

Dempster-Shafer fusion propagates confidence across the entire belief graph. When evidence changes, every downstream belief updates. The agent always knows how sure it is — and why that changed.

Biological: Dopaminergic signals
03

Consolidation Daemon

→ Conversations compress into durable knowledge

Background episodic compression during idle cycles. Conversations become long-term beliefs without manual intervention. Inspired by hippocampal replay during sharp-wave ripples (SWRs) in slow-wave sleep.

Biological: Sleep replay (SWRs)
04

Hippocampal Index

→ Contextually relevant, not just lexically similar

HippoRAG-style sparse pointers enable multi-hop retrieval across the belief graph. Traverses justification chains and associative paths — not just nearest-neighbor cosine.

Biological: Dentate gyrus indexing
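HippoRAG-style retrieval ranks graph nodes with personalized PageRank seeded at the nodes matched by the query. A minimal sketch over a toy belief graph (node names and graph shape are illustrative):

```python
def personalized_pagerank(graph, seeds, damping=0.85, iters=50):
    """Personalized PageRank over an adjacency dict {node: [neighbors]}.

    The restart vector concentrates on `seeds` (the query's matched
    nodes), so high scores mean "reachable from the query in few,
    well-supported hops" rather than "globally central".
    """
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in graph}
    rank = dict(restart)
    for _ in range(iters):
        # Mass on dangling (no-outlink) nodes teleports back to the seeds.
        dangling = sum(rank[n] for n, out in graph.items() if not out)
        nxt = {n: (1.0 - damping + damping * dangling) * restart[n]
               for n in graph}
        for n, out in graph.items():
            if out:
                share = damping * rank[n] / len(out)
                for m in out:
                    nxt[m] += share
        rank = nxt
    return rank

graph = {
    "query:dinner_rec": ["belief:vegetarian", "belief:likes_italian"],
    "belief:vegetarian": ["conv_08", "conv_14"],
    "belief:likes_italian": ["conv_45"],
    "conv_08": [], "conv_14": [], "conv_45": [],
}
scores = personalized_pagerank(graph, seeds={"query:dinner_rec"})
```

Nodes one or two hops from the seed score above unrelated ones, which is what lets retrieval follow justification chains instead of stopping at nearest-neighbor cosine matches.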
05

Reconsolidation

→ Retrieval itself can trigger belief update

Recall opens a lability window. When retrieved context conflicts with stored belief, revise() triggers AGM-minimal belief revision. The memory updates — not just the output.

Biological: Memory lability on recall
06

Outcome Feedback Loop

→ Agent measurably improves without retraining

feedback() stores reasoning episodes as QueryNodes. Real-world outcomes propagate back to adjust confidence. GoalNodes persist objectives across sessions. The agent compounds competence.

Biological: Reward-based plasticity
"Agents need belief systems, not just memory systems. Belief implies justification. Justification implies revision. Revision implies learning."
— Mnemebrain Architecture Thesis, 2026

AI agents that can
explain every decision.

Enterprise AI

Auditable reasoning chains

Every decision your agent makes can be traced through the justification graph. Compliance, explainability, and accountability — built into memory, not bolted on top. The audit trail is the architecture.

Personal Assistants

Beliefs that evolve with context

When a user's preferences change, the agent doesn't just overwrite — it revises. Evidence from new sessions updates confidence in old beliefs, with full provenance preserved.

Research Agents

Contradiction detection at scale

ATTACKS edges in the belief graph automatically flag when new evidence conflicts with established knowledge. Research agents that surface uncertainty, not bury it.

Multi-Agent Systems

Conflict-free belief sharing

CRDT-based belief replication lets multiple agents share and diverge on knowledge without coordination bottlenecks. Memory as distributed, consistent infrastructure.
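State-based CRDT merges are what make coordination-free replication possible: a merge that is commutative, associative, and idempotent guarantees replicas converge in any order. A minimal sketch for a single replicated belief, with a grow-only evidence set and last-writer-wins confidence (fields and tiebreaker are illustrative):

```python
def merge_beliefs(a, b):
    """Merge two replica states of the same belief.

    Evidence is a grow-only set (union); claim and confidence are
    last-writer-wins, with (timestamp, agent_id) as a deterministic
    tiebreaker so every replica resolves conflicts the same way.
    """
    newer = a if (a["ts"], a["agent"]) >= (b["ts"], b["agent"]) else b
    return {
        "claim": newer["claim"],
        "confidence": newer["confidence"],
        "evidence": a["evidence"] | b["evidence"],
        "ts": newer["ts"],
        "agent": newer["agent"],
    }

# Two agents updated the same belief independently:
r1 = {"claim": "user likes Italian", "confidence": 0.82,
      "evidence": {"conv_12"}, "ts": 10, "agent": "A"}
r2 = {"claim": "user likes Italian", "confidence": 0.61,
      "evidence": {"conv_21"}, "ts": 12, "agent": "B"}
merged = merge_beliefs(r1, r2)
```

Because the merge is deterministic and order-independent, agents can exchange belief states opportunistically — no locks, no central coordinator — and still end up with identical graphs.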

The full architecture
is public.

Mnemebrain is built on a complete, versioned engineering specification — covering the belief data model, all four core operations, the consolidation pipeline, retrieval layer, QueryNodes, GoalNodes, and integration API.

If you're evaluating whether this is the right infrastructure for your system, start here. No sales call required.

Architecture spec — table of contents
01  Why existing systems fail  [core]
02  Belief node data model  [core]
03  Four core operations  [core]
04  Confidence propagation
05  Consolidation daemon
06  Hippocampal retrieval index
07  Reconsolidation on recall
08  QueryNodes — reasoning memory  [new]
09  GoalNodes & PolicyNodes  [new]
10  Multi-agent CRDT replication
11  Integration API

Your agents are ready
to actually learn.

Early access open for teams building serious agent infrastructure.

Read the technical spec