The Compilation Pipeline

NormCode plans go through a compilation process from intent to execution:

Instruction (natural language) or .ncds (draft)
      ↓
Derivation → .ncds
      ↓
Formalization (Grammar check → .ncd)
      ↓
Post-Formalization (Context + resource demand)
      ↓
Activation (Resolve resources → JSON repos)
      ↓
  Orchestrator Executes

For most users: you can start with a natural-language instruction or write .ncds directly. The compiler produces .concept.json + .inference.json for execution.
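The staged pipeline above can be sketched as a chain of functions. This is a minimal illustration only: every function name and data shape below is a hypothetical stand-in, not the real NormCode compiler API.

```python
# Illustrative sketch of the compilation stages; all names and data shapes
# here are hypothetical stand-ins, not the actual NormCode compiler API.
def derive(instruction):
    """Derivation: natural-language instruction -> .ncds draft."""
    return {"ncds": f"draft of: {instruction}"}

def formalize(draft):
    """Formalization: grammar-check the draft -> .ncd plan."""
    return {"ncd": draft["ncds"]}

def post_formalize(plan):
    """Post-formalization: attach context and resource demands."""
    return {**plan, "resources": ["llm", "scripts"]}  # assumed resource kinds

def activate(plan):
    """Activation: resolve resources -> the two JSON repos."""
    return {"concept.json": {}, "inference.json": {}, "source": plan["ncd"]}

def compile_plan(instruction):
    return activate(post_formalize(formalize(derive(instruction))))
```

Starting from a hand-written .ncds draft simply skips the derivation step.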

Inside-Out, Top-to-Bottom Execution

NormCode executes inside-out (innermost first) and top-to-bottom (sibling order): nested inferences complete before their parents, and siblings execute in declaration order.

/: Step 3 - Runs last (needs both inputs)
<- result
    <= calculate
    /: Step 1a - Can run immediately
    <- input A
        <= process A
        <- raw A
    /: Step 1b - Can run in parallel with 1a
    <- input B
        <= process B
        <- raw B

Execution Order

  1. Inferences for input A and input B are ready immediately (only need raw inputs)
  2. They can run in parallel since they don't depend on each other
  3. Once both complete → the calculate inference can run
  4. Result is produced
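The four steps above can be sketched as a wave-based scheduler. This is a minimal illustration, not the real Orchestrator: the dependency table and names (`input A`, `input B`, `result`) are taken from the example plan.

```python
# Illustrative inside-out scheduling: group inferences into "waves" where
# everything in one wave can run in parallel. Names mirror the example above.
deps = {
    "input A": [],                      # needs only raw A, ready immediately
    "input B": [],                      # needs only raw B, ready immediately
    "result": ["input A", "input B"],   # the calculate inference runs last
}

def execution_waves(deps):
    """Return lists of inferences in the order waves become ready."""
    done, waves = set(), []
    while len(done) < len(deps):
        ready = [n for n, d in deps.items()
                 if n not in done and all(x in done for x in d)]
        if not ready:
            raise RuntimeError("cycle in dependencies")
        waves.append(ready)
        done.update(ready)
    return waves

print(execution_waves(deps))
# [['input A', 'input B'], ['result']]
```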

Readiness Criteria

An inference becomes ready to execute when:

  1. All of its declared inputs have values available (every nested inference that produces them has completed)
  2. Any timing condition it depends on has resolved

The Two Types of Execution

Semantic Sequences

Create information through reasoning, generation, or evaluation. May use an LLM, but can also be optimized into scripts.

Sequence     LLM?    Cost            Examples
Imperative   Maybe   Tokens or free  Extract, generate, transform, analyze
Judgement    Maybe   Tokens or free  Evaluate, validate, decide, check

Syntactic Sequences (Data Manipulation)

Reshape information through deterministic operations. No LLM involved.

Sequence    LLM?   Cost    Examples
Assigning   No     Free    Select, accumulate, pick first valid
Grouping    No     Free    Collect, combine, bundle items
Timing      No     Free    Branch, wait, depend on condition
Looping     No     Free*   Iterate, repeat for each item

* The loop structure is free; semantic operations inside the loop cost tokens.

The Orchestrator

The Orchestrator runs in cycles, managing execution flow:

FOR EACH CYCLE:
  1. CHECK   → Scan waitlist for ready inferences
  2. EXECUTE → Run ready inferences (via AgentFrames)
  3. UPDATE  → Mark completed, store results
  4. REPEAT  → Until all inferences complete
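The cycle above can be sketched in a few lines. This is a minimal illustration, assuming a dict-based Blackboard and a `run_inference` callable standing in for AgentFrames; it is not the real Orchestrator.

```python
# Minimal sketch of the check/execute/update cycle. The dict-based blackboard
# and run_inference callable are hypothetical stand-ins for the real machinery.
def orchestrate(waitlist, deps, run_inference):
    blackboard = {idx: "pending" for idx in waitlist}  # status tracker
    results = {}                                       # stands in for ConceptRepo
    while any(s == "pending" for s in blackboard.values()):
        # 1. CHECK: scan the waitlist for ready inferences
        ready = [idx for idx in waitlist
                 if blackboard[idx] == "pending"
                 and all(blackboard[d] == "completed" for d in deps.get(idx, []))]
        if not ready:
            raise RuntimeError("no inference is ready; dependency cycle?")
        # 2. EXECUTE + 3. UPDATE: run, store the result, mark completed
        for idx in ready:
            results[idx] = run_inference(idx)
            blackboard[idx] = "completed"
    return results  # 4. REPEAT happens via the while loop

results = orchestrate(["1.2", "1.3", "1"],
                      {"1": ["1.2", "1.3"]},
                      lambda idx: f"value@{idx}")
```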

State Management

Component      Purpose
Waitlist       Static list of all inferences (by flow_index)
Blackboard     Dynamic status tracker (pending / in_progress / completed / skipped)
ConceptRepo    Stores data references for all concepts
InferenceRepo  Stores inference definitions and sequences

Checkpointing & Resuming

The Orchestrator saves complete state to SQLite, enabling powerful workflow control:

⏸️ Pause & Resume: Stop execution at any cycle and continue later from exactly where you left off.

🔀 Fork & Branch: Create a new run from any checkpoint to experiment with different approaches.

🔄 Smart Patching: Re-run only changed logic while keeping valid cached results.
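A minimal checkpoint store can be built with Python's standard sqlite3 module. The real Orchestrator's schema is not documented here, so the table layout below is an assumption for illustration only.

```python
# Hypothetical checkpoint store using the stdlib sqlite3 module; the actual
# Orchestrator schema is assumed, not documented.
import json
import sqlite3

def save_checkpoint(db, run_id, cycle, blackboard):
    """Persist the blackboard snapshot for one cycle of a run."""
    db.execute("CREATE TABLE IF NOT EXISTS checkpoints "
               "(run_id TEXT, cycle INTEGER, blackboard TEXT)")
    db.execute("INSERT INTO checkpoints VALUES (?, ?, ?)",
               (run_id, cycle, json.dumps(blackboard)))
    db.commit()

def load_checkpoint(db, run_id):
    """Return (cycle, blackboard) for the latest checkpoint of a run."""
    row = db.execute("SELECT cycle, blackboard FROM checkpoints "
                     "WHERE run_id = ? ORDER BY cycle DESC LIMIT 1",
                     (run_id,)).fetchone()
    return (row[0], json.loads(row[1])) if row else None

db = sqlite3.connect(":memory:")
save_checkpoint(db, "run-1", 3, {"1.2": "completed", "1": "pending"})
cycle, board = load_checkpoint(db, "run-1")
```

In this sketch, forking amounts to loading a checkpoint and saving it again under a new run_id.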

What Gets Saved

Everything needed to reconstruct the run exactly: the current cycle position, the Blackboard status of every inference, and the data references held in the ConceptRepo.

Flow Index System

Every node has a unique flow index that identifies its position in the execution DAG:

1                # Root concept (output)
├── 1.1          # Function concept
├── 1.2          # First value input
└── 1.3          # Second value input
    ├── 1.3.1    # Sub-inference function
    └── 1.3.2    # Sub-inference value

Flow indices are used for:

  1. Identifying each inference in the Waitlist and tracking its status on the Blackboard
  2. Addressing nodes when checkpointing, forking, and patching runs
  3. Locating nodes in the Canvas App during visual debugging
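A few small helpers over flow indices, assuming the dotted-path scheme shown in the tree above. These helpers are illustrative, not part of NormCode.

```python
# Illustrative flow-index helpers; the dotted-path scheme is taken from the
# example tree, the function names are hypothetical.
def parse(idx):
    """'1.3.2' -> (1, 3, 2), suitable for sorting in declaration order."""
    return tuple(int(part) for part in idx.split("."))

def parent(idx):
    """'1.3.2' -> '1.3'; the root index has no parent."""
    parts = idx.split(".")
    return ".".join(parts[:-1]) if len(parts) > 1 else None

def is_descendant(idx, ancestor):
    """True if idx lies beneath ancestor in the execution DAG."""
    return idx != ancestor and idx.startswith(ancestor + ".")

print(parent("1.3.2"))                # 1.3
print(is_descendant("1.3.2", "1.3"))  # True
```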

Visual Debugging with Canvas App

The Canvas App provides a visual, interactive environment for executing and debugging NormCode plans:

📊 Visualize: See the entire inference graph before execution.

👁️ Watch: Monitor execution progress in real-time.

🐛 Debug: Set breakpoints and step through execution.

🔍 Inspect: View tensor data at any node in the graph.

Debugging Patterns

Inference not running? Check its status on the Blackboard and verify that every input it declares has completed.

Wrong result? Inspect the tensor data at the producing node and at each of its inputs to find where the value diverges.

Loop not terminating? Check the condition its timing sequence depends on; the loop only exits when that condition resolves.

The Execution Guarantee

NormCode's core promise: Every inference sees exactly—and only—what you explicitly declare. No hidden context, no state bleeding, full auditability.

Mechanism              Enforcement
Inside-out execution   Can't run until inputs are ready
Reference isolation    Each concept has its own Reference
Explicit retrieval     Only declared inputs are fetched
No global state        No hidden context bleeding