The only agent memory that detects circular dependencies in your knowledge, routes reasoning depth automatically, revises beliefs formally, and forgets on command. Open-source MCP server — works with any model.
v0.3.1 · 28 MCP Tools · 6 Node Types · κ-Routing · GraphMemBench 120/120 · Elixir/OTP · Apache 2.0

The system analyzed a 4-node business cycle, routed reasoning depth automatically, and now supports both topology-aware deliberation and proactive attention cycles with model-tier adaptation.
routing: fast · max_kappa: 0 · action: single-pass retrieval, no deliberation needed
routing: deliberate · max_kappa: 1 · scc_count: 1 · fault line: Product Quality → Market Share · budget: max_iterations 2, agents 1, confidence 0.75
{
"routing": "deliberate",
"max_kappa": 1,
"scc_count": 1,
"sccs": [{
"id": "scc-0",
"nodes": ["market-share", "revenue", "r-and-d", "product-quality"],
"kappa": 1,
"approximate": false,
"fault_line_edges": [{
"source": "product-quality",
"target": "market-share"
}],
"routing": "deliberate",
"deliberation_budget": {
"max_iterations": 2,
"agent_count": 1,
"timeout_multiplier": 1.5,
"confidence_threshold": 0.75
}
}],
"dag_nodes": []
}
Live result from Graphonomous MCP server. The system detected a circular dependency between market share, revenue, R&D, and product quality — and identified the exact edge (Product Quality → Market Share) where the feedback loop is weakest. No other agent memory system does this.
Every agent memory system retrieves context. These six capabilities go beyond retrieval — each one is demonstrated with real MCP payloads.
Detects circular dependencies via Tarjan’s algorithm. Routes κ=0 regions to fast retrieval, κ>0 to deliberation with computed budgets.
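The routing idea can be sketched in a few lines. This is an illustrative approximation, not the shipped Elixir implementation: Tarjan's algorithm finds strongly connected components, and nodes inside a nontrivial SCC (a feedback loop) are treated as κ > 0 and routed to deliberation. The example graph mirrors the 4-node business cycle from the demo payload; the extra `pricing` node is hypothetical.

```python
# Illustrative sketch: SCC membership stands in for the real kappa invariant,
# which is Graphonomous-internal.

def tarjan_scc(graph):
    """graph: {node: [successors]} -> list of SCCs (each a list of nodes)."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:          # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

def route(graph):
    """Route nodes inside feedback loops to deliberation, the rest to fast."""
    cyclic = {n for scc in tarjan_scc(graph) if len(scc) > 1 for n in scc}
    cyclic |= {n for n, succs in graph.items() if n in succs}  # self-loops
    return {n: ("deliberate" if n in cyclic else "fast") for n in graph}

# The 4-node business cycle from the demo payload, plus one acyclic node:
g = {
    "market-share": ["revenue"],
    "revenue": ["r-and-d"],
    "r-and-d": ["product-quality"],
    "product-quality": ["market-share"],
    "pricing": [],
}
```

Running `route(g)` routes all four cycle members to `deliberate` and the acyclic `pricing` node to `fast`, matching the two routing outcomes shown at the top of the page.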
AGM-rational expand/revise/contract. Detects contradictions automatically and propagates confidence changes through dependency chains.
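A minimal sketch of the propagation half of this capability, assuming a simple damped-share rule over an acyclic dependency chain (the actual AGM machinery and damping behavior in Graphonomous are not specified here):

```python
# Illustrative confidence propagation after a revision: when a node's
# confidence changes, each dependent receives a damped share of the change.
# Assumes acyclic dependencies; the real system handles cycles via topology.

def propagate(conf, deps, node, new_conf, damping=0.5):
    """conf: {node: confidence}; deps: {node: [nodes that depend on it]}."""
    delta = new_conf - conf[node]
    conf[node] = new_conf
    for child in deps.get(node, []):
        propagate(conf, deps, child, conf[child] + damping * delta, damping)
    return conf

conf = {"A": 0.9, "B": 0.8, "C": 0.7}
deps = {"A": ["B"], "B": ["C"]}        # B depends on A, C depends on B
propagate(conf, deps, "A", 0.5)        # revising A downward
# A drops to 0.5; B and C each absorb a damped half-share of the change
```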
Soft hide, cascade delete, policy-based pruning, and GDPR Article 17 permanent erasure with audit trail. Three forgetting modes in one tool surface.
Wilson score intervals reveal where one more piece of evidence would most reduce uncertainty. The system directs its own investigation priorities.
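The Wilson interval itself is standard statistics; a sketch of how ranking by interval width could drive investigation priority (node names and evidence counts below are made up for illustration):

```python
import math

# Beliefs with wide Wilson intervals are where one more observation
# shrinks uncertainty the most.

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a successes/n proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (max(0.0, center - half), min(1.0, center + half))

def frontier(evidence):
    """evidence: {node: (supporting, total)} -> nodes ranked widest-first."""
    def width(node):
        lo, hi = wilson_interval(*evidence[node])
        return hi - lo
    return sorted(evidence, key=width, reverse=True)

beliefs = {
    "churn-cause": (2, 3),     # little evidence: wide interval
    "pricing-works": (40, 50), # lots of evidence: narrow interval
    "seo-helps": (1, 1),       # single observation: widest interval
}
# frontier(beliefs)[0] is the belief most worth investigating next
```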
Proactive goal prioritization combining urgency, coverage gaps, and topology-aware reasoning depth. Survey → triage → dispatch with autonomy controls.
Retrieval rankings shift based on outcome feedback. High-confidence but low-utility knowledge drops in ranking automatically. Self-correcting memory.
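One plausible shape for this mechanism, sketched with an exponential moving average over outcome feedback; the actual scoring function is not documented here, and the node names are hypothetical:

```python
# Illustrative outcome-driven reranking: each node keeps a utility score
# updated by an EMA of feedback, and retrieval rank combines stored
# confidence with learned utility.

def update_utility(utility, node, outcome, alpha=0.3):
    """outcome: 1.0 (node helped) or 0.0 (it didn't). EMA toward the outcome."""
    utility[node] = (1 - alpha) * utility.get(node, 0.5) + alpha * outcome
    return utility

def rank(nodes, confidence, utility):
    return sorted(nodes, key=lambda n: confidence[n] * utility.get(n, 0.5),
                  reverse=True)

confidence = {"legacy-doc": 0.95, "fresh-note": 0.7}
utility = {}
for _ in range(4):                          # repeated negative feedback
    update_utility(utility, "legacy-doc", 0.0)
update_utility(utility, "fresh-note", 1.0)  # one positive outcome
# high-confidence but low-utility "legacy-doc" now ranks below "fresh-note"
```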
Agents store episodic, semantic, and procedural knowledge as typed graph nodes with confidence scores and provenance. Edges capture causal, temporal, and associative relationships.
On every retrieval, Graphonomous computes the topological structure of the relevant subgraph. Tarjan's SCC algorithm detects circular dependencies. The κ invariant measures entanglement depth.
κ = 0 → fast retrieval. On constrained tiers, low-κ regions can be enriched without full deliberation. Higher-friction regions route to deliberate (decompose fault lines, reconcile, write conclusions back). Attention then prioritizes what to do next under autonomy and budget controls.
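A κ-to-budget mapping consistent with the demo payload above (κ=1 yields 2 iterations, 1 agent, 1.5× timeout, 0.75 confidence threshold) might look like the following. The scaling formulas are illustrative assumptions, not the server's internals:

```python
# Illustrative kappa-to-budget mapping; only the kappa=1 case is grounded
# in the demo payload, the scaling beyond that is an assumption.

def routing_decision(kappa):
    if kappa == 0:
        return {"routing": "fast"}  # single-pass retrieval, no deliberation
    return {
        "routing": "deliberate",
        "deliberation_budget": {
            "max_iterations": 2 * kappa,
            "agent_count": kappa,
            "timeout_multiplier": 1.0 + 0.5 * kappa,
            "confidence_threshold": min(0.9, 0.7 + 0.05 * kappa),
        },
    }
```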
| Tool | Description |
|---|---|
| store_node | Persist knowledge nodes with type, confidence, metadata |
| store_edge | Create directed relationships between nodes (16 edge types, default weight 0.3) |
| delete_node | Remove a node and its connected edges |
| manage_edge | Edge lifecycle — list, update weight/decay, delete |
| retrieve_context | Semantic search + neighborhood expansion + topology annotations + κ-aware routing |
| query_graph | List, filter, similarity search across the graph |
| topology_analyze | Compute SCCs, κ values, routing decision, fault-line edges |
| graph_traverse | BFS walk with depth and relationship filters |
| graph_stats | Aggregate counts, type distributions, confidence stats, orphan detection |
| retrieve_episodic | Time-range filtered episodic node retrieval |
| retrieve_procedural | Semantic search scoped to procedural how-to nodes |
| coverage_query | Standalone epistemic coverage — act/learn/escalate decision |
| learn_from_outcome | Update confidence across causal chains from grounded outcomes |
| learn_from_feedback | Positive/negative/correction feedback on nodes |
| learn_detect_novelty | Similarity-based novelty scoring for new concepts |
| learn_from_interaction | Full pipeline: novelty → store → extract claims → link |
| deliberate | κ-driven focused reasoning over cyclic regions with optional crystallization |
| manage_goal | Goal lifecycle — create, transition, link nodes, set progress |
| review_goal | Coverage-driven decision gate for goals |
| run_consolidation | Seven-stage pipeline including decay, prune, strengthen, merge, promote, and abstract |
| attention_survey | Ranked attention map across goals, coverage, and topology signals |
| attention_run_cycle | Trigger one survey/triage/dispatch attention cycle with autonomy override |
| belief_revise | AGM-style expand/revise/contract with confidence propagation through dependency graphs |
| belief_contradictions | Detect contradictions via semantic similarity + negation markers |
| epistemic_frontier | Wilson score intervals — find where evidence would most reduce uncertainty |
| forget_node | Soft (hide), hard (delete), or cascade (delete + orphans) forgetting modes |
| forget_by_policy | Hybrid LRU + priority-decay automatic pruning with dry-run preview |
| gdpr_erase | GDPR Article 17 permanent deletion with audit trail |
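For orientation, any tool in the table is invoked through MCP's standard JSON-RPC `tools/call` method. The argument names below (`scope`, `node_ids`) are illustrative placeholders, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "topology_analyze",
    "arguments": { "scope": "subgraph", "node_ids": ["market-share", "revenue"] }
  }
}
```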
Every agent memory system retrieves context. Graphonomous is the only one that tells you the shape of what you retrieved.
Graphonomous computes κ — a proved graph-theoretic invariant — on every retrieval. When your knowledge has feedback loops, the system tells you exactly where they are and how to reason through them.
No other agent memory system offers soft forget, cascade delete, policy-based pruning, and permanent audit-logged erasure (GDPR Article 17) in one tool surface.
The κ invariant has been verified on 1,926,351 finite systems with zero counterexamples. The proof is browser-runnable at opensentience.org.
The theoretical foundations, deliberation protocol, attention engine, and governance model are published as open research protocols OS-001 through OS-008.
The first empirical evaluation (OS-E001) benchmarks the full engine on 18,165 files across 14 projects: 12,880 edges, 22 SCCs, κ=27, graph beats flat retrieval (+0.103 recall), 100% test pass rate across all 28 MCP tools. Raw data and reproduction scripts included.
GraphMemBench is a 120-scenario capability validation suite across 15 categories, testing every continual learning capability from κ activation to GDPR forgetting. All scenarios pass with 309 tests and 0 failures.
Phase 1 (Foundation):
Kappa Activation · Belief Revision · Conflict-Aware Consolidation · Two-Phase Retrieval · Intentional Forgetting
Phase 2 (Advanced):
Uncertainty Propagation · Procedural Retrieval · Multi-Agent Prep · Integration Scenarios · Stress
Phase 3 (Causal):
Causal Metadata · E2E Workflows · Regression Guards · Competitor Adapters · Reporting
Competitor adapter interface validates against 5 implementations: Graphonomous (live), Baseline, Mem0, Zep, and Hindsight stubs.
Belief Revision — AGM-style expand/revise/contract with automatic contradiction detection and confidence propagation through dependency graphs.
Intentional Forgetting — Soft, hard, and cascade modes plus hybrid LRU+priority-decay policy pruning and GDPR Article 17 erasure with audit trails.
Epistemic Frontier — Wilson score intervals identify where one more piece of evidence would most reduce uncertainty. Information gain ranking for research prioritization.
Causal Edge Metadata — causal_strength, confounders, and intervention_history on edges, updated automatically during outcome learning.
Hybrid Retrieval — nomic-embed-text-v1.5 (768d) + BM25 via SQLite FTS5 + cross-encoder reranking. Estimated +6–14pp SHR lift over v0.2.
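The document names the three retrieval components but not the fusion rule that merges them before the cross-encoder stage. A common choice for merging a dense ranking with a BM25 ranking is Reciprocal Rank Fusion, sketched here with made-up node IDs; whether Graphonomous uses RRF specifically is an assumption:

```python
# Reciprocal Rank Fusion: merge rankings by summing 1/(k + rank) per list,
# so documents that rank well in either list surface in the fused order.

def rrf(rankings, k=60):
    """rankings: list of ranked doc-id lists -> fused ranking."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

dense = ["n3", "n1", "n7"]  # embedding-cosine order (illustrative)
bm25  = ["n1", "n7", "n3"]  # FTS5 BM25 order (illustrative)
fused = rrf([dense, bm25])
# "n1" ranks well in both lists, so fusion promotes it to the top
```

In a full pipeline, the top of `fused` would then be rescored by the cross-encoder reranker.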