Autonomy & Agents

Momental does not wait to be asked. Every 15 minutes, agents scan your workspace for signals that need attention. Here is what runs, when, and why.

The Proactive Loop

Most knowledge management tools are passive - they store what you put in and return it when you search. Momental is active. A wakeup loop runs every 15 minutes and triggers a cascade of automated intelligence work:

flowchart TD
  Wake["Agent wakeup (every 15 min)"]
  Scan["Scan for signals"]
  CD["Conflict detection<br/>New atoms vs existing graph"]
  GD["Gap detection<br/>Missing derivation chains"]
  CT["Cross-tree conflict detection<br/>Atoms vs strategy nodes"]
  Brief["Generate proactive brief<br/>if signals found"]
  Chat["Post to agent room"]

  Wake --> Scan
  Scan --> CD
  Scan --> GD
  Scan --> CT
  CD --> Brief
  GD --> Brief
  CT --> Brief
  Brief --> Chat

When agents find something worth surfacing - a new conflict, a gap in reasoning, a strategy node that contradicts established knowledge - they post a brief to the team's agent room. You receive a notification. If nothing is found, nothing is posted.
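The loop above can be sketched as a single function. Everything here is illustrative: the detector and posting functions are hypothetical stand-ins, not Momental's actual internals.

```python
def wakeup_cycle(workspace, detectors, post):
    """One pass of the proactive loop: run every detector and post a
    brief to the agent room only when at least one signal was found."""
    signals = [s for detect in detectors for s in detect(workspace)]
    if signals:
        post({"type": "proactive_brief", "signals": signals})
    return signals

# Hypothetical detectors; the real ones inspect the knowledge graph.
def detect_conflicts(ws):            return ws.get("new_conflicts", [])
def detect_gaps(ws):                 return ws.get("gaps", [])
def detect_cross_tree_conflicts(ws): return ws.get("cross_tree", [])

room = []
wakeup_cycle(
    {"new_conflicts": ["atom A contradicts atom B"], "gaps": [], "cross_tree": []},
    [detect_conflicts, detect_gaps, detect_cross_tree_conflicts],
    room.append,
)
```

The key property is the final conditional: an empty signal list produces no post at all, which is why a quiet workspace generates no notifications.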

Conflict Detection Pipeline

Every time a new atom enters the graph, Momental runs the 8-signal conflict detection ensemble against the existing knowledge. This is not a scheduled job - it runs immediately on ingestion.

The ensemble uses eight independent signals across semantic, logical, temporal, and authority dimensions. Signals are weighted and combined into a confidence score.

See Knowledge Graph for the full signal definitions.
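A weighted ensemble of this shape can be sketched as follows. The signal names and weights below are invented for illustration; the real definitions live in the Knowledge Graph documentation.

```python
# Illustrative weights for eight signals across the semantic, logical,
# temporal, and authority dimensions (hypothetical names and values).
WEIGHTS = {
    "semantic_contradiction": 0.20,
    "semantic_overlap":       0.10,
    "logical_negation":       0.20,
    "logical_exclusivity":    0.10,
    "temporal_supersession":  0.10,
    "temporal_staleness":     0.05,
    "authority_mismatch":     0.15,
    "authority_recency":      0.10,
}

def conflict_confidence(scores):
    """Combine per-signal scores in [0, 1] into one weighted confidence."""
    total = sum(WEIGHTS.values())
    return sum(WEIGHTS[name] * scores.get(name, 0.0) for name in WEIGHTS) / total
```

Normalizing by the weight total keeps the output in [0, 1] even if the weights are later rebalanced.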

Conflict lifecycle

stateDiagram-v2
  [*] --> PENDING: Conflict detected
  PENDING --> PENDING_APPROVAL: Decision requires approval
  PENDING --> DISMISSED: False positive
  PENDING_APPROVAL --> RESOLVED: Owner approves
  PENDING_APPROVAL --> DISMISSED: Dismissed
  PENDING --> AUTO_RESOLVED: Agent-safe resolution
  RESOLVED --> [*]
  DISMISSED --> [*]
  AUTO_RESOLVED --> [*]

Two resolutions are agent-safe: KEEP_EXISTING and KEEP_BOTH. Neither modifies or retires an existing atom, so agents can apply them autonomously without human approval.

Resolutions that modify or retire an existing atom (KEEP_NEW, REPLACE, MERGE) require approval from the atom's original author. This prevents agents from quietly overwriting human knowledge.
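The routing rule in the two paragraphs above fits in a few lines. This is a sketch of the policy, not Momental's implementation; the function name is hypothetical.

```python
AGENT_SAFE     = {"KEEP_EXISTING", "KEEP_BOTH"}    # never touch existing atoms
NEEDS_APPROVAL = {"KEEP_NEW", "REPLACE", "MERGE"}  # modify or retire an atom

def route_resolution(resolution, atom_author):
    """Return the next lifecycle state for a proposed conflict resolution."""
    if resolution in AGENT_SAFE:
        return "AUTO_RESOLVED"
    if resolution in NEEDS_APPROVAL:
        # Approval request goes to the atom's original author.
        return f"PENDING_APPROVAL:{atom_author}"
    raise ValueError(f"unknown resolution {resolution!r}")
```

Treating unknown resolution types as an error (rather than defaulting to auto-resolve) is what keeps the safe set a closed allowlist.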

Gap Detection

In parallel with conflict detection, Momental scans for structural gaps in your knowledge graph. A gap is a place where reasoning should exist but doesn't.

| Gap category | Example | Why it matters |
| --- | --- | --- |
| STRUCTURAL | A DECISION atom with no LEARNING parent | A decision made without documented evidence is a liability |
| EPISTEMIC | A new atom doesn't reference three closely related existing atoms | May be duplicating or contradicting existing knowledge without realizing it |
| COVERAGE | iOS is mentioned frequently; Android never is | Scope asymmetry suggests missing research |
| CONSTRAINT | A new atom implies a policy violation | Compliance risk surfaced before it becomes a problem |

Gaps are assigned to agents for investigation. For example, a STRUCTURAL gap - a DECISION with no LEARNING parent - is assigned to Huginn, which searches for existing evidence that could serve as the parent, or flags the decision for human review if none exists.
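The STRUCTURAL check can be sketched against a toy graph representation. The atom types come from the table above; the function and data shapes are assumptions for illustration.

```python
def structural_gaps(atoms, edges):
    """Return ids of DECISION atoms that have no LEARNING parent.

    `atoms` maps atom id -> atom type; `edges` is a list of
    (child_id, parent_id) pairs (a hypothetical graph encoding).
    """
    parents = {}
    for child, parent in edges:
        parents.setdefault(child, []).append(parent)
    return [
        atom_id for atom_id, kind in atoms.items()
        if kind == "DECISION"
        and not any(atoms.get(p) == "LEARNING" for p in parents.get(atom_id, []))
    ]
```

A decision backed by at least one LEARNING parent passes; a decision with no parents, or only non-LEARNING parents, is flagged.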

Cross-Tree Conflict Detection

The most strategically significant signals are cross-tree conflicts: places where your knowledge graph directly contradicts your strategy tree. These are not just inconsistencies - they represent misalignments between what you believe and what you're trying to do.

flowchart LR
  A["ATOM (Wisdom Tree)<br/>'We should deprecate the free tier in Q2'"]
  S["STRATEGY NODE<br/>KEY_RESULT: Grow freemium users by 50%"]
  C["CROSS-TREE CONFLICT<br/>ATOM_CONTRADICTS_STRATEGY"]

  A -.->|conflicts with| S
  A --> C
  S --> C

Cross-tree conflict detection runs on demand (via momental_trigger_conflict_detection) and as a scheduled weekly scan. Results appear in momental_cross_tree_conflicts_list.
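Structurally, a cross-tree scan pairs every atom against every strategy node and keeps the pairs a contradiction check flags. The sketch below is an assumption about shape only; in practice the `contradicts` predicate would be semantic analysis, not the keyword match used in this example.

```python
def cross_tree_conflicts(atoms, strategy_nodes, contradicts):
    """Compare every atom against every strategy node and record hits."""
    return [
        {"type": "ATOM_CONTRADICTS_STRATEGY", "atom": a["id"], "node": n["id"]}
        for a in atoms
        for n in strategy_nodes
        if contradicts(a, n)
    ]

# Toy predicate standing in for real semantic analysis.
atoms = [{"id": "atom-9", "text": "deprecate the free tier in Q2"}]
nodes = [{"id": "kr-3", "text": "grow freemium users by 50%"}]
hits = cross_tree_conflicts(
    atoms, nodes,
    lambda a, n: "free tier" in a["text"] and "freemium" in n["text"],
)
```

The pairwise scan is why this runs as a weekly job rather than on every ingestion: its cost grows with the product of the two trees' sizes.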

The Knowledge Extraction Flywheel

The most powerful autonomous process in Momental is the one that happens during normal use. Every conversation with Huginn ends with a knowledge extraction pass - the agent reads the full conversation and extracts discrete atoms from what was discussed, decided, or learned.

flowchart LR
  C["Chat session"]
  E["Huginn extracts draft atoms"]
  R["Human reviews + publishes"]
  B["Better context next session"]
  C --> E --> R --> B --> C

This flywheel means the workspace gets smarter with every conversation, not just every deliberate documentation effort. A team that uses Momental for daily work builds a rich knowledge graph without anyone sitting down to write documentation.

Draft atoms extracted during a session appear in the conversation UI as a post-response panel. Review takes 30 seconds. Each published atom improves every future conversation that touches the same domain.

Agent Coordination

Momental's agents coordinate through the strategy tree's task system. No agent picks up work directly - every piece of autonomous work exists as a task in the tree, with an agent assigned to it and a human in the loop for final approval.

flowchart TD
  H["Huginn detects signal"]
  T["Task created in strategy tree"]
  A["Assigned to specialist agent"]
  W["Agent works: momental_work_begin"]
  S["Agent submits: momental_work_complete"]
  R["IN_REVIEW - human approves or sends back"]
  D["DONE"]

  H --> T --> A --> W --> S --> R --> D

| Work type | Assigned to | What the agent does |
| --- | --- | --- |
| Document processing | Hermod | Extracts atoms from uploaded files |
| Documentation writing | Bragi | Researches and writes doc pages |
| Code execution | Thor | Runs scripts, generates reports |
| Review & monitoring | Huginn | Code review, conflict triage, briefs |
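The task lifecycle in the flowchart can be modeled as a small transition table. This is a hypothetical in-memory mirror of the workflow; the real transitions happen through the momental_work_begin and momental_work_complete tools.

```python
# Allowed (state, event) -> next-state transitions (illustrative names).
TRANSITIONS = {
    ("TODO",        "work_begin"):    "IN_PROGRESS",
    ("IN_PROGRESS", "work_complete"): "IN_REVIEW",
    ("IN_REVIEW",   "approve"):       "DONE",      # human approves
    ("IN_REVIEW",   "send_back"):     "IN_PROGRESS",  # human rejects
}

def advance(state, event):
    """Apply one lifecycle event; reject anything the table doesn't allow."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"cannot {event} from {state}") from None
```

Note that no transition reaches DONE without passing through IN_REVIEW, which encodes the human-in-the-loop guarantee from the paragraph above.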

What "Autonomous" Actually Means

Momental's autonomy is bounded. Agents can research, write drafts, create atoms, and flag issues without human intervention. What agents cannot do autonomously: apply conflict resolutions that modify or retire an existing atom (KEEP_NEW, REPLACE, MERGE), publish the draft atoms they extract from conversations, or mark their own tasks DONE. Each of those requires explicit human approval.

This is a deliberate design choice. Autonomous agents that can silently modify authoritative knowledge erode trust. Momental's agents augment human judgment - they do the research and surface the signals; humans make the final call.