Knowledge graphs that
know when to think

The only agent memory that detects circular dependencies in your knowledge, routes reasoning depth automatically, revises beliefs formally, and forgets on command. Open-source MCP server — works with any model.

v0.3.1 · 28 MCP Tools · 6 Node Types · κ-Routing · GraphMemBench 120/120 · Elixir/OTP · Apache 2.0

The κ routing + deliberation + attention stack

In the live demo below, the system analyzes a 4-node business cycle and routes reasoning depth automatically; the same stack supports topology-aware deliberation and proactive attention cycles with model-tier adaptation.

DAG Region (κ = 0)

routing:    fast
max_kappa:  0
action:     Single-pass retrieval.
            No deliberation needed.

SCC Region (κ > 0)

routing:    deliberate
max_kappa:  1
scc_count:  1
fault_line: Product Quality → Market Share
budget:     max_iterations: 2, agents: 1,
            confidence: 0.75
MCP Tool: topology_analyze — Input: 4 business cycle nodes
{
  "routing": "deliberate",
  "max_kappa": 1,
  "scc_count": 1,
  "sccs": [{
    "id": "scc-0",
    "nodes": ["market-share", "revenue", "r-and-d", "product-quality"],
    "kappa": 1,
    "approximate": false,
    "fault_line_edges": [{
      "source": "product-quality",
      "target": "market-share"
    }],
    "routing": "deliberate",
    "deliberation_budget": {
      "max_iterations": 2,
      "agent_count": 1,
      "timeout_multiplier": 1.5,
      "confidence_threshold": 0.75
    }
  }],
  "dag_nodes": []
}

Live result from the Graphonomous MCP server. The system detected a circular dependency between market share, revenue, R&D, and product quality — and identified the exact edge (Product Quality → Market Share) where the feedback loop is weakest. No other agent memory system does this.


Capabilities no other agent memory has

Every agent memory system retrieves context. These six capabilities go beyond retrieval — each one is demonstrated with real MCP payloads.

All 17 interactive demos with real MCP payloads →


How it works

Store

Agents store episodic, semantic, and procedural knowledge as typed graph nodes with confidence scores and provenance. Edges capture causal, temporal, and associative relationships.
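A store_node call might look like the sketch below. The argument names (`content`, `node_type`, `confidence`, `metadata`) are assumptions inferred from the tool table, not a confirmed schema:

```python
import json

# Hypothetical store_node arguments; field names follow the tool
# description (type, confidence, metadata) but the exact schema is assumed.
node = {
    "name": "store_node",
    "arguments": {
        "content": "Shipping delays correlate with supplier X outages",
        "node_type": "semantic",          # e.g. episodic | semantic | procedural
        "confidence": 0.8,                # initial belief strength
        "metadata": {"provenance": "agent-run-42"},
    },
}

payload = json.dumps(node, indent=2)
print(payload)
```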

Analyze

On every retrieval, Graphonomous computes the topological structure of the relevant subgraph. Tarjan's SCC algorithm detects circular dependencies. The κ invariant measures entanglement depth.
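To make the SCC step concrete, here is a minimal Tarjan's algorithm over the 4-node business cycle from the demo above. This is a textbook sketch for illustration; Graphonomous' internal Elixir/OTP implementation will differ:

```python
# Minimal Tarjan's strongly-connected-components sketch.
def tarjan_sccs(graph):
    index, lowlink = {}, {}
    stack, on_stack, sccs = [], set(), []
    counter = [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:          # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            if len(scc) > 1 or v in graph.get(v, []):
                sccs.append(scc)            # keep only genuine cycles

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# The 4-node business cycle from the demo payload above.
cycle = {
    "market-share": ["revenue"],
    "revenue": ["r-and-d"],
    "r-and-d": ["product-quality"],
    "product-quality": ["market-share"],
}
print(tarjan_sccs(cycle))  # one SCC containing all four nodes
```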

Route

κ = 0 routes to fast single-pass retrieval. On constrained model tiers, low-κ regions can be enriched without full deliberation. Regions with higher κ route to deliberate: decompose along fault-line edges, reconcile the competing claims, and write conclusions back to the graph. Attention then prioritizes what to do next under autonomy and budget controls.


Available tools

Tool Description
store_node Persist knowledge nodes with type, confidence, metadata
store_edge Create directed relationships between nodes (16 edge types, default weight 0.3)
delete_node Remove a node and its connected edges
manage_edge Edge lifecycle — list, update weight/decay, delete
retrieve_context Semantic search + neighborhood expansion + topology annotations + κ-aware routing
query_graph List, filter, similarity search across the graph
topology_analyze Compute SCCs, κ values, routing decision, fault-line edges
graph_traverse BFS walk with depth and relationship filters
graph_stats Aggregate counts, type distributions, confidence stats, orphan detection
retrieve_episodic Time-range filtered episodic node retrieval
retrieve_procedural Semantic search scoped to procedural how-to nodes
coverage_query Standalone epistemic coverage — act/learn/escalate decision
learn_from_outcome Update confidence across causal chains from grounded outcomes
learn_from_feedback Positive/negative/correction feedback on nodes
learn_detect_novelty Similarity-based novelty scoring for new concepts
learn_from_interaction Full pipeline: novelty → store → extract claims → link
deliberate κ-driven focused reasoning over cyclic regions with optional crystallization
manage_goal Goal lifecycle — create, transition, link nodes, set progress
review_goal Coverage-driven decision gate for goals
run_consolidation 7-stage pipeline: decay, prune, strengthen, merge, promote, abstract
attention_survey Ranked attention map across goals, coverage, and topology signals
attention_run_cycle Trigger one survey/triage/dispatch attention cycle with autonomy override
belief_revise AGM-style expand/revise/contract with confidence propagation through dependency graphs
belief_contradictions Detect contradictions via semantic similarity + negation markers
epistemic_frontier Wilson score intervals — find where evidence would most reduce uncertainty
forget_node Soft (hide), hard (delete), or cascade (delete + orphans) forgetting modes
forget_by_policy Hybrid LRU + priority-decay automatic pruning with dry-run preview
gdpr_erase GDPR Article 17 permanent deletion with audit trail
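Because the server is MCP-native, any MCP client can invoke these tools over standard JSON-RPC 2.0. A raw tools/call request for topology_analyze might look roughly like this (the `node_ids` argument name is an assumption, not a confirmed schema):

```python
import json

# JSON-RPC 2.0 tools/call request per the MCP spec; the tool's
# argument names are illustrative assumptions.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "topology_analyze",
        "arguments": {
            "node_ids": ["market-share", "revenue", "r-and-d", "product-quality"]
        },
    },
}
print(json.dumps(request))
```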

What makes this different

Every agent memory system retrieves context. Graphonomous is the only one that tells you the shape of what you retrieved.

  • Claude Code KAIROS (leaked March 2026) uses a single memory timescale with flat key-value context. Graphonomous uses 4 timescales (working → short → long → consolidated) with a typed knowledge graph. KAIROS is locked to Claude. Graphonomous is MCP-native — works with any model.
  • Mem0 stores facts with smart updates. It doesn’t detect circular dependencies.
  • Zep / Graphiti builds temporal knowledge graphs. It doesn’t route inference based on topology.
  • Letta (MemGPT) pages memory in and out of context. It doesn’t know when context is tangled.
  • Hindsight (arXiv 2512.12818) uses CARA reflection and multi-signal retrieval. Strong on LongMemEval but cloud-LLM dependent, no topology awareness, no GDPR forgetting, and no belief revision.

Graphonomous computes κ, a proven graph-theoretic invariant, on every retrieval. When your knowledge has feedback loops, the system tells you exactly where they are and how to reason through them.

GDPR-compliant forgetting is a unique differentiator. No other agent memory system offers soft forget, cascade delete, policy-based pruning, and permanent audit-logged erasure (Article 17) in one tool surface.
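The three forget_node modes can be illustrated as request payloads. The `mode` argument name is assumed from the tool table; the actual schema may differ:

```python
# Illustrative forget_node calls for the three forgetting modes;
# the argument names are assumptions, not a confirmed schema.
def forget_request(node_id, mode):
    assert mode in {"soft", "hard", "cascade"}, "unknown forgetting mode"
    return {"name": "forget_node", "arguments": {"node_id": node_id, "mode": mode}}

print(forget_request("n-123", "soft"))     # hide, recoverable
print(forget_request("n-123", "hard"))     # delete the node
print(forget_request("n-123", "cascade"))  # delete node + orphaned neighbors
```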


Under the hood


Proven theory

The κ invariant has been exhaustively verified on 1,926,351 finite systems with zero counterexamples. The proof is browser-runnable at opensentience.org.

The theoretical foundations, deliberation protocol, attention engine, and governance model are published as open research protocols OS-001 through OS-008.

The first empirical evaluation (OS-E001) benchmarks the full engine on 18,165 files across 14 projects: 12,880 edges, 22 SCCs, κ=27, graph beats flat retrieval (+0.103 recall), 100% test pass rate across all 28 MCP tools. Raw data and reproduction scripts included.

GRAPHMEMBENCH v0.3.1 — 120/120 SCENARIOS

GraphMemBench is a 120-scenario capability validation suite across 15 categories, testing every continual learning capability from κ activation to GDPR forgetting. All scenarios pass with 309 tests and 0 failures.

Phase 1 (Foundation): Kappa Activation · Belief Revision · Conflict-Aware Consolidation · Two-Phase Retrieval · Intentional Forgetting
Phase 2 (Advanced): Uncertainty Propagation · Procedural Retrieval · Multi-Agent Prep · Integration Scenarios · Stress
Phase 3 (Causal): Causal Metadata · E2E Workflows · Regression Guards · Competitor Adapters · Reporting

Competitor adapter interface validates against 5 implementations: Graphonomous (live), Baseline, Mem0, Zep, and Hindsight stubs.

v0.3 CAPABILITIES

Belief Revision — AGM-style expand/revise/contract with automatic contradiction detection and confidence propagation through dependency graphs.
Intentional Forgetting — Soft, hard, and cascade modes plus hybrid LRU+priority-decay policy pruning and GDPR Article 17 erasure with audit trails.
Epistemic Frontier — Wilson score intervals identify where one more piece of evidence would most reduce uncertainty. Information gain ranking for research prioritization.
Causal Edge Metadata — causal_strength, confounders, and intervention_history on edges, updated automatically during outcome learning.
Hybrid Retrieval — nomic-embed-text-v1.5 (768d) + BM25 via SQLite FTS5 + cross-encoder reranking. Estimated +6–14pp SHR lift over v0.2.
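The epistemic-frontier idea above can be sketched in a few lines: a Wilson score interval's width measures how much one more observation could shrink a belief's uncertainty, and the widest intervals are ranked first. Graphonomous' actual scoring may weight things differently:

```python
import math

# Wilson score interval for a belief with `successes` out of `n` trials;
# z = 1.96 gives a 95% interval. Width ~ remaining uncertainty.
def wilson_interval(successes, n, z=1.96):
    if n == 0:
        return (0.0, 1.0)                  # no evidence: maximal uncertainty
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return (max(0.0, center - half), min(1.0, center + half))

def frontier_rank(nodes):
    # Widest interval first: evidence there reduces uncertainty the most.
    def width(node):
        lo, hi = wilson_interval(node["successes"], node["trials"])
        return hi - lo
    return sorted(nodes, key=width, reverse=True)

nodes = [
    {"id": "well-tested", "successes": 45, "trials": 50},
    {"id": "barely-tested", "successes": 1, "trials": 2},
]
print([n["id"] for n in frontier_rank(nodes)])  # barely-tested ranks first
```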