# Knowledge Graph
The Wisdom Tree in depth - atoms, bonds, voice metadata, derivation chains, and the 8-signal conflict detection system that keeps knowledge accurate automatically.
## What Is an Atom?
An atom is the smallest unit of knowledge that can stand alone. Not a paragraph, not a document - a single, falsifiable claim. The discipline of atomicity is what makes knowledge searchable, linkable, and worth maintaining.
Good atoms:
- Make exactly one claim: "API p99 latency is 180ms at 1,000 RPS"
- Are self-contained - readable without surrounding context
- Can be confirmed or refuted by evidence
- Have the right type (DATA, LEARNING, DECISION, or PRINCIPLE)
Bad atoms:
- Make multiple claims in one statement
- Contain vague language ("sometimes", "might", "could be better")
- Duplicate another atom already in the graph
## The Four Atom Types
Each type encodes the atom's epistemic role - how certain it is and how it was derived. The types form a derivation chain from raw observation to distilled wisdom:
```mermaid
flowchart LR
    D["DATA<br/>Raw measurement<br/>or observation"]
    L["LEARNING<br/>Pattern synthesized<br/>from multiple data points"]
    DE["DECISION<br/>Committed choice<br/>with rationale"]
    P["PRINCIPLE<br/>Guiding rule derived<br/>from repeated decisions"]
    D -->|DERIVES_FROM| L -->|DERIVES_FROM| DE -->|DERIVES_FROM| P
```
| Type | What it captures | Example |
|---|---|---|
| DATA | Measurements, observations, raw facts | "NPS dropped 12 points in Q3" |
| LEARNING | Insights synthesized from multiple data points | "Enterprise users churn when onboarding exceeds 2 weeks" |
| DECISION | Committed choices with documented rationale | "We will sunset the free tier in Q2 - unit economics don't support it at scale" |
| PRINCIPLE | Guiding beliefs that apply to future choices | "Always optimize for time-to-value over feature breadth" |
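As a mental model, the four types can be sketched as a small data structure. This is an illustrative sketch only - `Atom`, `AtomType`, and the field names are assumptions for this page, not Momental's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class AtomType(Enum):
    # Ordered from raw observation to distilled wisdom
    DATA = "data"            # measurements, observations, raw facts
    LEARNING = "learning"    # pattern synthesized from multiple data points
    DECISION = "decision"    # committed choice with rationale
    PRINCIPLE = "principle"  # guiding rule derived from repeated decisions

@dataclass
class Atom:
    claim: str           # a single, falsifiable statement
    atom_type: AtomType

# One claim, one atom - never a paragraph
nps = Atom("NPS dropped 12 points in Q3", AtomType.DATA)
```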
## Voice Metadata
Every atom carries voice metadata - who said it, from whose perspective, and how authoritative that voice is. This is one of Momental's most powerful and least obvious features.
The voice source records whose perspective the atom represents:
| Voice Type | Meaning | Example |
|---|---|---|
| DECIDED | Committed, settled - no longer open for debate | "We're using Stripe for payments" |
| PROPOSED | Suggested, not yet committed | "We should consider usage-based pricing" |
| KNOWN | Established external fact | "AWS has six availability zones in us-east-1" |
| BELIEVED | Conclusion drawn with reasoning | "Freemium will outperform sales-led for SMB" |
| ASSUMED | Taken as given without verification | "Users prefer dashboards over reports" |
| OBSERVED | Seen or measured directly | "Page load time is 3.2s on mobile" |
| RECEIVED | Input from an external source | "Customer X needs SSO by March" |
The distinction between DECIDED and BELIEVED is particularly important.
DECIDED atoms suppress debate - querying for them returns authoritative direction.
BELIEVED atoms invite scrutiny - they should be challenged when new evidence arrives.
Voice metadata also captures the source perspective - is this your team's view, a customer's quote, a competitor's claim, or market intelligence? This lets you ask: "What are our customers saying about pricing?" and get only externally-sourced atoms.
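The voice taxonomy lends itself to a simple enum plus a source field. The names `Voice`, `VoiceMetadata`, and the example `source` strings below are hypothetical, not Momental's API:

```python
from dataclasses import dataclass
from enum import Enum

class Voice(Enum):
    DECIDED = "decided"    # committed, settled - suppresses debate
    PROPOSED = "proposed"  # suggested, not yet committed
    KNOWN = "known"        # established external fact
    BELIEVED = "believed"  # conclusion drawn with reasoning - invites scrutiny
    ASSUMED = "assumed"    # taken as given without verification
    OBSERVED = "observed"  # seen or measured directly
    RECEIVED = "received"  # input from an external source

@dataclass
class VoiceMetadata:
    voice: Voice
    source: str  # whose perspective: e.g. "team", "customer", "competitor", "market"

def externally_sourced(v: VoiceMetadata) -> bool:
    """Filter behind queries like 'what are our customers saying?'"""
    return v.source != "team"

quote = VoiceMetadata(Voice.RECEIVED, source="customer")
```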
## Bond Types
Bonds are the edges of the Wisdom Tree. Unlike the strict parent-child hierarchy of the Strategy Tree, atoms can have multiple bonds of different types - creating a genuine graph structure within the Wisdom Tree.
| Bond Type | Meaning | When to use |
|---|---|---|
| DERIVES_FROM | Provenance - this knowledge came from that knowledge | Standard derivation chain links (DATA→LEARNING→DECISION→PRINCIPLE) |
| SUPPORTS | Corroborating evidence | A second data point that confirms an existing learning |
| CONTRADICTS | Direct contradiction | Two atoms that cannot both be true - triggers conflict detection |
| SUPERSEDES | Replacement - the older atom is no longer current | A decision has been reversed; a measurement has been updated |
| COMPLEMENTARY | Related insights that work together | Two learnings from different domains that reinforce each other |
## The Derivation Chain
The derivation chain is not just a convention - it's structurally enforced. When Momental detects a DECISION atom with no parent LEARNING, it raises a structural gap. A decision made without evidence is a liability.
```mermaid
flowchart TD
    D1["DATA: Checkout abandonment 67%"]
    D2["DATA: 84% of abandoners cited surprise shipping cost"]
    L["LEARNING: Users abandon when shipping costs appear late"]
    DE["DECISION: Show estimated shipping cost before payment step"]
    P["PRINCIPLE: No surprise costs - ever"]
    D1 -->|SUPPORTS| L
    D2 -->|DERIVES_FROM| L
    L -->|DERIVES_FROM| DE
    DE -->|DERIVES_FROM| P
```
Every node in this chain is independently searchable. An agent working on the checkout flow can search for DECISIONS about checkout and find this, along with the evidence trail that justifies it - without reading any documents.
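One way to picture that lookup: store bonds as (source, bond-type, target) triples and walk the incoming edges behind a decision to recover its evidence trail. All identifiers below are invented for illustration:

```python
from collections import defaultdict

# Bonds as (source, bond_type, target) triples, mirroring the chain above
edges = [
    ("data:abandonment-67pct", "SUPPORTS", "learning:late-shipping"),
    ("data:abandoners-cite-shipping", "DERIVES_FROM", "learning:late-shipping"),
    ("learning:late-shipping", "DERIVES_FROM", "decision:show-shipping-early"),
    ("decision:show-shipping-early", "DERIVES_FROM", "principle:no-surprise-costs"),
]

incoming = defaultdict(list)
for src, _bond, dst in edges:
    incoming[dst].append(src)

def evidence_trail(atom_id: str) -> list[str]:
    """Walk every atom upstream of atom_id - the trail that justifies it."""
    trail, stack = [], [atom_id]
    while stack:
        for src in incoming[stack.pop()]:
            trail.append(src)
            stack.append(src)
    return trail
```

Searching for the DECISION and then calling `evidence_trail` on it surfaces the learning and both data points without opening a single document.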
## Confidence Decay
Atoms are not eternal. Confidence in a claim decays over time relative to when it was authored and last validated. An observation from three years ago ranks lower in retrieval than one from last month, even if the semantic match is identical.
Decay rate depends on atom type: DATA atoms decay fastest (measurements go stale quickly). PRINCIPLE atoms decay slowest (guiding beliefs are more stable). DECISION atoms decay at a medium rate and get flagged for review when they are significantly older than the learnings they were based on.
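Momental's actual decay function isn't specified here; one plausible sketch is exponential decay with per-type half-lives. The half-life values below are invented purely to illustrate the relative ordering (DATA fastest, PRINCIPLE slowest):

```python
# Hypothetical half-lives in days - the doc only states the ordering
# (DATA fastest, DECISION medium, PRINCIPLE slowest), not the numbers.
HALF_LIFE_DAYS = {
    "DATA": 90,
    "LEARNING": 365,
    "DECISION": 180,
    "PRINCIPLE": 1095,
}

def confidence(atom_type: str, age_days: float) -> float:
    """Exponential decay from 1.0, with age measured since last validation."""
    return 0.5 ** (age_days / HALF_LIFE_DAYS[atom_type])
```

Under this sketch, a three-year-old DATA atom scores far below a month-old one in retrieval even when the semantic match is identical, while a PRINCIPLE of the same age barely moves.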
## Conflict Detection: The 8-Signal Ensemble
Every time a new atom is added, Momental runs an 8-signal ensemble against the existing graph to detect contradictions. This is fully automatic - you do not need to trigger it.
```mermaid
flowchart LR
    New["New atom arrives"]
    E["8-signal<br/>ensemble runs"]
    Clear["CLEAR conflict<br/>High confidence<br/>Auto-flagged"]
    Ambig["AMBIGUOUS<br/>Medium confidence<br/>LLM analysis"]
    None["UNRELATED<br/>Low confidence<br/>No conflict"]
    New --> E
    E --> Clear
    E --> Ambig
    E --> None
```
The ensemble covers four categories of contradiction: semantic (atoms that say similar things with opposing meaning), logical (structurally implied contradictions and implication chains), temporal (overlapping time scopes with incompatible claims), and authority & scope (conflicting claims across sources, perspectives, or product/geographic scope).
Signals are weighted and combined into a confidence score. High-confidence conflicts are automatically flagged and routed to the team. Medium-confidence cases are analyzed by an LLM before surfacing. Low-confidence pairs are treated as unrelated.
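A weighted ensemble of this shape can be sketched as follows. The individual signal names, the weights, and the routing thresholds are all assumptions - the document only specifies the four categories and the three outcomes:

```python
# Two illustrative signals per category (semantic, logical, temporal,
# authority & scope); weights sum to 1.0 but are not Momental's.
SIGNAL_WEIGHTS = {
    "semantic_opposition": 0.20,
    "negation_pattern": 0.10,
    "logical_implication": 0.15,
    "implication_chain": 0.10,
    "temporal_overlap": 0.15,
    "temporal_incompatibility": 0.10,
    "authority_conflict": 0.10,
    "scope_conflict": 0.10,
}

def conflict_confidence(signals: dict[str, float]) -> float:
    """Combine per-signal scores in [0, 1] into one confidence value."""
    return sum(SIGNAL_WEIGHTS[name] * score for name, score in signals.items())

def route(confidence: float) -> str:
    """Route a scored atom pair; thresholds here are hypothetical."""
    if confidence >= 0.8:
        return "CLEAR"      # auto-flagged and routed to the team
    if confidence >= 0.4:
        return "AMBIGUOUS"  # escalated to LLM analysis before surfacing
    return "UNRELATED"      # treated as no conflict
```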
To manage conflicts, see Conflicts & Gaps. To understand how Momental monitors for these signals continuously, see Autonomy & agents.
## Gap Detection
In parallel with conflict detection, Momental scans for missing knowledge. Four categories of gaps:
| Category | What it catches |
|---|---|
| STRUCTURAL | Broken derivation chains - a DECISION with no parent LEARNING |
| EPISTEMIC | Existing relevant atoms that are not referenced by new knowledge |
| COVERAGE | Scope gaps - iOS mentioned, but no Android equivalent |
| CONSTRAINT | Policy or compliance violations implied by the new atom |
Detected gaps appear in `momental_gaps_list` and can be assigned to agents to fill. The automated gap scan runs weekly; you can trigger it on demand with `momental_trigger_conflict_detection`.
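As a sketch of how the STRUCTURAL check could work - atom types held in a dict and bonds as (source, bond-type, target) triples, both invented for illustration:

```python
def structural_gaps(atoms: dict[str, str], edges: list[tuple[str, str, str]]) -> list[str]:
    """Flag DECISION atoms with no LEARNING upstream - a broken derivation chain."""
    has_learning_parent = {
        dst for src, bond, dst in edges
        if bond == "DERIVES_FROM" and atoms.get(src) == "LEARNING"
    }
    return [a for a, t in atoms.items()
            if t == "DECISION" and a not in has_learning_parent]

atoms = {"d1": "DATA", "l1": "LEARNING", "dec1": "DECISION", "dec2": "DECISION"}
edges = [("d1", "DERIVES_FROM", "l1"), ("l1", "DERIVES_FROM", "dec1")]
# dec1 is grounded in l1; dec2 has no parent LEARNING and gets flagged
```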