Claude Code’s KAIROS autonomous agent just leaked. Graphonomous is the open-source version that’s already more capable. See the comparison ↓

Knowledge graphs that
know when to think

Graphonomous gives AI agents real memory that persists, learns, and knows when its own knowledge has circular dependencies. It’s an open-source MCP server — plug it into Claude, ChatGPT, Cursor, or any model.

v0.2.0 · 22 MCP Tools · 6 Node Types · κ-Routing · Elixir/OTP · MCP Server · Apache 2.0

The κ routing + deliberation + attention stack

The system analyzed a 4-node business cycle and routed reasoning depth automatically. It now supports both topology-aware deliberation and proactive attention cycles, with model-tier adaptation.

DAG Region (κ = 0)

routing:    fast
max_kappa:  0
action:     Single-pass retrieval.
            No deliberation needed.

SCC Region (κ > 0)

routing:    deliberate
max_kappa:  1
scc_count:  1
fault_line: Product Quality → Market Share
budget:     max_iterations: 2, agents: 1,
            confidence: 0.75
MCP Tool: topology_analyze — Input: 4 business cycle nodes
{
  "routing": "deliberate",
  "max_kappa": 1,
  "scc_count": 1,
  "sccs": [{
    "id": "scc-0",
    "nodes": ["market-share", "revenue", "r-and-d", "product-quality"],
    "kappa": 1,
    "approximate": false,
    "fault_line_edges": [{
      "source": "product-quality",
      "target": "market-share"
    }],
    "routing": "deliberate",
    "deliberation_budget": {
      "max_iterations": 2,
      "agent_count": 1,
      "timeout_multiplier": 1.5,
      "confidence_threshold": 0.75
    }
  }],
  "dag_nodes": []
}

Live result from the Graphonomous MCP server. The system detected a circular dependency between market share, revenue, R&D, and product quality, and identified the exact edge (Product Quality → Market Share) where the feedback loop is weakest. No other agent memory system does this.


How it works

Store

Agents store episodic, semantic, and procedural knowledge as typed graph nodes with confidence scores and provenance. Edges capture causal, temporal, and associative relationships.
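As a rough sketch of that data model, the snippet below mirrors the store_node/store_edge vocabulary in Python. The class and field names (node_type, confidence, provenance, the 0.3 default weight) are illustrative stand-ins, not Graphonomous's actual Elixir schema:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    node_type: str      # e.g. "episodic", "semantic", "procedural"
    content: str
    confidence: float   # belief strength in [0.0, 1.0]
    provenance: str     # where this knowledge came from

@dataclass
class Edge:
    source: str
    target: str
    relation: str       # e.g. "causal", "temporal", "associative"
    weight: float = 0.3 # default edge weight per the tool table below

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def store_node(self, node: Node) -> None:
        self.nodes[node.id] = node

    def store_edge(self, edge: Edge) -> None:
        self.edges.append(edge)

g = Graph()
g.store_node(Node("revenue", "semantic", "Quarterly revenue funds R&D",
                  confidence=0.9, provenance="agent-session-12"))
g.store_node(Node("r-and-d", "semantic", "R&D spend improves product quality",
                  confidence=0.8, provenance="agent-session-12"))
g.store_edge(Edge("revenue", "r-and-d", "causal"))
```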

Analyze

On every retrieval, Graphonomous computes the topological structure of the relevant subgraph. Tarjan's SCC algorithm detects circular dependencies. The κ invariant measures entanglement depth.
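The cycle detection step can be sketched with the textbook Tarjan algorithm, here run on the 4-node business cycle from the demo above. This is a standalone illustration, not the Elixir implementation Graphonomous ships:

```python
def tarjan_sccs(graph):
    """graph: {node: [successors]} -> list of SCCs, each a list of nodes."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:  # v is the root of an SCC
            scc = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                if w == v:
                    break
            sccs.append(scc)

    for v in graph:
        if v not in index:
            strongconnect(v)
    return sccs

# The feedback loop from the topology_analyze example:
business = {
    "market-share": ["revenue"],
    "revenue": ["r-and-d"],
    "r-and-d": ["product-quality"],
    "product-quality": ["market-share"],
}
cycles = [s for s in tarjan_sccs(business) if len(s) > 1]
```

All four nodes land in a single SCC, matching the `scc_count: 1` result above.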

Route

κ = 0 → fast single-pass retrieval. On constrained model tiers, low-κ regions can be enriched without full deliberation. κ > 0 regions route to deliberate: decompose along fault-line edges, reconcile, and write conclusions back. Attention then prioritizes what to do next under autonomy and budget controls.
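The routing decision itself is small enough to sketch. The budget values below reproduce the topology_analyze output shown earlier for κ = 1; the function name and the κ-scaling rules are illustrative assumptions, not Graphonomous's actual policy:

```python
def route(max_kappa: int) -> dict:
    """Map a region's entanglement depth κ to a routing decision (sketch)."""
    if max_kappa == 0:
        # Acyclic (DAG) region: single-pass retrieval, no deliberation.
        return {"routing": "fast"}
    # Entangled (SCC) region: allocate a deliberation budget that grows
    # with κ. Assumed scaling; chosen to match the κ=1 example above.
    return {
        "routing": "deliberate",
        "deliberation_budget": {
            "max_iterations": 2 * max_kappa,
            "agent_count": max_kappa,
            "timeout_multiplier": 1.0 + 0.5 * max_kappa,
            "confidence_threshold": 0.75,
        },
    }
```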


Available tools

Tool · Description
store_node · Persist knowledge nodes with type, confidence, metadata
store_edge · Create directed relationships between nodes (16 edge types, default weight 0.3)
delete_node · Remove a node and its connected edges
manage_edge · Edge lifecycle: list, update weight/decay, delete
retrieve_context · Semantic search + neighborhood expansion + topology annotations + κ-aware routing
query_graph · List, filter, similarity search across the graph
topology_analyze · Compute SCCs, κ values, routing decision, fault-line edges
graph_traverse · BFS walk with depth and relationship filters
graph_stats · Aggregate counts, type distributions, confidence stats, orphan detection
retrieve_episodic · Time-range filtered episodic node retrieval
retrieve_procedural · Semantic search scoped to procedural how-to nodes
coverage_query · Standalone epistemic coverage: act/learn/escalate decision
learn_from_outcome · Update confidence across causal chains from grounded outcomes
learn_from_feedback · Positive/negative/correction feedback on nodes
learn_detect_novelty · Similarity-based novelty scoring for new concepts
learn_from_interaction · Full pipeline: novelty → store → extract claims → link
deliberate · κ-driven focused reasoning over cyclic regions with optional crystallization
manage_goal · Goal lifecycle: create, transition, link nodes, set progress
review_goal · Coverage-driven decision gate for goals
run_consolidation · 7-stage pipeline: decay, prune, strengthen, merge, promote, abstract
attention_survey · Ranked attention map across goals, coverage, and topology signals
attention_run_cycle · Trigger one survey/triage/dispatch attention cycle with autonomy override
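Because Graphonomous is an MCP server, every tool above is invoked through the standard MCP tools/call request. A hypothetical store_node call might look like the following; the outer JSON-RPC envelope is the MCP wire format, while the argument names are illustrative, so consult the server's published tool schemas for the actual fields:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "store_node",
    "arguments": {
      "type": "semantic",
      "content": "R&D spend improves product quality",
      "confidence": 0.8
    }
  }
}
```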

What makes this different

Every agent memory system retrieves context. Graphonomous is the only one that tells you the shape of what you retrieved.

  • Claude Code KAIROS (leaked March 2026) uses a single memory timescale with flat key-value context. Graphonomous uses 4 timescales (working → short → long → consolidated) with a typed knowledge graph. KAIROS is locked to Claude. Graphonomous is MCP-native — works with any model.
  • Mem0 stores facts with smart updates. It doesn’t detect circular dependencies.
  • Zep / Graphiti builds temporal knowledge graphs. It doesn’t route inference based on topology.
  • Letta (MemGPT) pages memory in and out of context. It doesn’t know when context is tangled.

Graphonomous computes κ, a proven graph-theoretic invariant, on every retrieval. When your knowledge has feedback loops, the system tells you exactly where they are and how to reason through them.


Under the hood


Proven theory

The κ invariant has been proven across 1,926,351 finite systems with zero counterexamples. The proof is browser-runnable at opensentience.org.

The theoretical foundations, deliberation protocol, attention engine, and governance model are published as open research protocols OS-001 through OS-008.

The first empirical evaluation (OS-E001) benchmarks the full engine on 18,165 files across 14 projects: 12,880 edges, 22 SCCs, κ=27, graph beats flat retrieval (+0.103 recall), 100% test pass rate across all 22 MCP tools. Raw data and reproduction scripts included.

OpenSentience · OS-E001 Benchmark · Ampersand Box Design · [&] Protocol Spec