Getting Started with Context

Context is what makes Huginn useful. An empty workspace gets generic answers. A rich workspace gets answers that know your goals, your decisions, and your team.

What "Context" Means in Momental

When Huginn receives a message, it loads a context package before calling the AI model. This package contains your team's mission and vision, active OKRs, the most relevant knowledge atoms for the question, and what Huginn already knows about you personally.

This context is what separates "tell me about pricing strategy" from "tell me about our pricing strategy" - the difference between a generic LLM response and an answer that knows your specific decisions, customers, and constraints.

flowchart LR
  Q["Your question"]
  subgraph C["Context loaded per message"]
    M["Team mission + vision"]
    O["Active OKRs"]
    A["Relevant atoms (RRF search)"]
    UM["Your user memory"]
  end
  Q --> C --> H["Huginn generates<br/>grounded answer"]

The more you put into Momental, the better every answer gets. This is the compounding flywheel: good context → better answers → decisions documented as atoms → better context next time.

Step 1: Set Your Mission and Vision

Before creating any atoms, anchor your workspace with a mission and vision. These load into every Huginn conversation, providing the strategic frame that makes all knowledge meaningful.

In the workspace UI: go to Strategy → New Node → VISION. Write your company's long-term aspirational state in one sentence. Then add a MISSION node as its child.

Via MCP:

import { momental_strategy_create } from '@momental/mcp';

const vision = await momental_strategy_create({
  nodeType: "VISION",
  statement: "Every company knows what it believes"
});

const mission = await momental_strategy_create({
  nodeType: "MISSION",
  statement: "Build the alignment platform for product teams",
  parentId: vision.id
});

Don't over-engineer this. A mission and vision that are honest about where you are right now are more useful than aspirational ones that don't reflect reality.

Step 2: Create Your First Atoms

Start with decisions. The most valuable knowledge to capture is the reasoning behind choices your team has already made. Documentation explains what you built; atoms explain why.

Three atoms worth creating in your first session:

  1. A DECISION you made in the last month, with its rationale
  2. A LEARNING from a recent experiment or customer conversation
  3. A PRINCIPLE your team keeps returning to when making trade-offs
Via MCP:

await momental_node_create({
  statement: "We chose Postgres over DynamoDB because queries are relational and we need consistent reads",
  nodeType: "DECISION",
  status: "ACTIVE",
  domain: "engineering",
  sourceQuote: "Evaluated both in Q4. DynamoDB's eventual consistency and awkward query model would have required significant application-layer compensation logic.",
  tags: ["database", "architecture", "infrastructure"]
});
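The other two starter atoms take the same shape. As a sketch, the payloads might look like this; the field names mirror the momental_node_create call above, but the statements, domains, and tags are invented examples, not real data:

```typescript
// Invented example payloads for the LEARNING and PRINCIPLE starter atoms.
// Field names follow the momental_node_create example above.
type AtomInput = {
  statement: string;
  nodeType: "DECISION" | "LEARNING" | "PRINCIPLE";
  status: "ACTIVE";
  domain: string;
  tags: string[];
};

const learning: AtomInput = {
  statement: "Trial users who invite a teammate in week one convert at roughly twice the baseline rate",
  nodeType: "LEARNING",
  status: "ACTIVE",
  domain: "product",
  tags: ["activation", "onboarding"]
};

const principle: AtomInput = {
  statement: "Prefer boring technology unless the problem is genuinely novel",
  nodeType: "PRINCIPLE",
  status: "ACTIVE",
  domain: "engineering",
  tags: ["architecture", "trade-offs"]
};
```

Each would be passed to momental_node_create exactly as in the DECISION example.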

Step 3: Link Atoms to Your Strategy

Atoms become much more powerful when they are linked to the strategy nodes they inform. A DECISION about pricing is more useful when it's connected to the KEY_RESULT about revenue.

Momental auto-links atoms to strategy nodes via semantic similarity - when you create an atom, it scans your strategy tree and suggests the most relevant node to link it to. You can also link explicitly via the treeNodeId parameter or via momental_node_link.

In the workspace UI: when viewing an atom, use the Link to Strategy button in the detail panel. The search will surface relevant nodes.
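To make the explicit-linking call concrete, here is a sketch with momental_node_link stubbed locally so it runs standalone. Only the treeNodeId parameter is named in this guide; the other parameter name, the IDs, and the return shape are assumptions, and the real tool from '@momental/mcp' is async and should be awaited:

```typescript
// Hypothetical sketch: momental_node_link is stubbed here so the call
// shape is concrete. Real signatures may differ; treeNodeId is the only
// parameter named in this guide.
type LinkArgs = { nodeId: string; treeNodeId: string };

function momental_node_link(args: LinkArgs): { linked: boolean } & LinkArgs {
  return { linked: true, ...args };
}

// Connect a pricing DECISION atom to the revenue KEY_RESULT it informs
// (both IDs are invented for the example).
const link = momental_node_link({
  nodeId: "atom_pricing_decision",
  treeNodeId: "kr_q1_revenue"
});
```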

Step 4: Upload a Document

If you have existing knowledge in documents - a design spec, a retrospective, meeting notes - upload them. Hermod will process the document and extract discrete atoms automatically. A 10-page design document typically yields 15–30 atoms in 1–2 minutes.

In the workspace UI: use Knowledge → Upload Document. Supported formats: Markdown, PDF (text-based), plain text, and Word.

Via MCP:

const doc = await momental_document_add({
  title: "Q1 Architecture Review",
  content: documentText,
  domain: "engineering"
});

// Extraction takes 1–2 minutes
// Atoms are DRAFT until published
const status = await momental_document_status({ documentId: doc.id });
if (status.status === "COMPLETE") {
  await momental_document_publish({ documentId: doc.id });
}

Uploaded documents feed Hermod, Momental's knowledge ingestion agent. See Hermod's documentation for supported file types and extraction details.

Step 5: Ask Huginn a Question

Open the workspace chat and ask something specific to your domain. The quality of the answer tells you how much useful context you've built.

Good first questions, using the atoms you just created:

  1. "Why did we choose Postgres over DynamoDB?"
  2. "What are our active objectives, and which recent decisions support them?"
  3. "What principles should guide this trade-off?"

Huginn will cite the specific atoms it retrieved to answer. If it can't find relevant atoms, that itself is useful feedback - you know what to add next.

What Huginn Loads on Every Message

Understanding what's in the context package helps you know what to populate. On every message, Huginn loads:

| Component | What it contains | How to improve it |
| --- | --- | --- |
| Team context | Mission, vision, active objectives | Add a mission + vision node; populate your OKRs |
| User identity | Your name, role, and level | Update your profile in workspace settings |
| User memory (IDENTITY tier) | Permanent facts Huginn has saved about you | Builds automatically as you chat |
| User memory (FOCUS tier) | What you're currently working on | Tell Huginn your current priorities; it saves them |
| Relevant atoms | Top 6–10 atoms matching your query via RRF fusion search | Create more atoms in the relevant domain |

The RRF fusion search - which combines semantic similarity, keyword matching, entity extraction, and graph traversal - is described in detail in How Huginn Works.
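To build intuition for why fusion beats any single signal, here is a minimal sketch of generic reciprocal rank fusion. This is illustrative only, not Momental's implementation; the constant k = 60 is a conventional default, and the ranker labels are assumptions:

```typescript
// Illustrative reciprocal rank fusion (RRF): each ranker contributes an
// ordered list of atom IDs, and an atom's fused score is the sum of
// 1 / (k + rank) across rankers. Atoms ranked well by several signals
// float to the top even if no single signal ranks them first.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, index) => {
      const rank = index + 1; // ranks are 1-based
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}

// Example: "a3" sits near the top of both rankers, so it fuses first.
const fused = rrfFuse([
  ["a1", "a3", "a2"], // e.g. semantic similarity ranking
  ["a3", "a4", "a1"], // e.g. keyword match ranking
]);
```

In Huginn's case the fused list is then truncated to the top 6–10 atoms that enter the context package.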

The Flywheel

Context compounds. Each Huginn conversation extracts draft atoms from what was discussed. When published, those atoms improve the context for future conversations. The team that documents aggressively gets measurably better answers than the one that doesn't.

You don't have to manage this manually. After each conversation, Huginn presents extracted draft atoms in the UI. Review and publish the ones that are accurate. It takes 30 seconds and improves every future answer that touches the same domain.