# The 4-Phase Pipeline
NormCode progressively formalizes intent into structure. Each phase answers a specific question while preserving semantic intent.
## What Each Phase Does
| Phase | Question Answered | Input | Output |
|---|---|---|---|
| Instruction | What and why? | User intent | Natural language |
| Derivation | In what order? | Natural language | `.ncds` |
| Formalization | Is the grammar consistent? | `.ncds` | `.ncd` |
| Post-Formalization | What context and resources are demanded? | `.ncd` | `.ncd` (enriched) |
| Activation | Which actual resources and executables? | `.ncd` (enriched) | JSON repositories |

The Instruction row is the starting input rather than a compilation phase; Phases 1 through 4 are Derivation, Formalization, Post-Formalization, and Activation.
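The hand-off contract in the table can be checked mechanically: each phase's output format must match the next phase's input. A minimal sketch (the `PHASES` list and `handoffs_consistent` helper are hypothetical names for illustration, not part of NormCode's actual tooling):

```python
# Illustrative only: the phase contract from the table above, encoded as
# (phase, input format, output format) triples.
PHASES = [
    ("Derivation", "natural language", ".ncds"),
    ("Formalization", ".ncds", ".ncd"),
    ("Post-Formalization", ".ncd", ".ncd (enriched)"),
    ("Activation", ".ncd (enriched)", "JSON repositories"),
]

def handoffs_consistent(phases):
    """Each phase's output format must equal the next phase's input format."""
    return all(a[2] == b[1] for a, b in zip(phases, phases[1:]))
```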
## Phase 1: Derivation

Determine execution order by transforming the instruction into a hierarchical inference structure.

### What It Does
- Identifies concepts (data entities)
- Identifies operations (actions to perform)
- Extracts dependencies (which concepts feed into which operations)
- Creates hierarchical tree structure
### Example

**Input (Natural Language)**

"Summarize this document by first cleaning it and then extracting key points."

**Output (`.ncds`)**

```
<- document summary
    <= summarize this text
    <- clean text
        <= extract main content, removing headers
        <- raw document
```
**Key features:** Bottom-up reading, simple markers (`<-`, `<=`), natural language descriptions. No flow indices or sequence types yet.
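The two markers are simple enough that a toy reader fits in a few lines. This sketch assumes 4-space indentation encodes nesting and invents a flat `(depth, kind, description)` representation; neither is part of NormCode itself:

```python
# Toy reader for an .ncds draft. The marker meanings (<- for concepts,
# <= for operations) come from the docs; the indentation convention and
# the tuple layout are assumptions for illustration.
def parse_ncds(text):
    """Return a flat list of (depth, kind, description) entries."""
    entries = []
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        depth = (len(raw) - len(raw.lstrip())) // 4  # 4 spaces per level
        if line.startswith("<="):
            entries.append((depth, "operation", line[2:].strip()))
        elif line.startswith("<-"):
            entries.append((depth, "concept", line[2:].strip()))
    return entries

draft = """\
<- document summary
    <= summarize this text
    <- clean text
        <= extract main content, removing headers
        <- raw document
"""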
## Phase 2: Formalization

Grammar consistency check: add structural rigor to ensure the plan conforms to NormCode grammar.

### What It Adds
- Unique flow indices (e.g., `1.2.3`) for every step
- Sequence types for each inference (imperative, grouping, timing, looping)
- Value bindings (`<:{1}>`, `<:{2}>`) for input ordering
- Metadata comments (`?{sequence}:`, `?{flow_index}:`)
### Example

**Output (`.ncd`)**

```
::{document summary} | ?{flow_index}: 1
    <= ::(summarize this text) | ?{flow_index}: 1.1 | ?{sequence}: imperative
    <- {clean text} | ?{flow_index}: 1.2
        <= ::(extract main content) | ?{flow_index}: 1.2.1 | ?{sequence}: imperative
        <- {raw document} | ?{flow_index}: 1.2.1.1
```
**Key changes:** Every line has a flow index. Functional concepts have sequence types. The root concept is marked with `:<:`. Semantic types added (`{}` for objects, `::()` for imperatives).
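The flow-index numbering follows directly from tree position: the root gets `1`, and each child appends its 1-based position. A sketch of how such indices could be assigned (the dict-of-children tree shape is an assumption, not NormCode's internal model):

```python
# Illustrative flow-index assignment over a plan tree.
def assign_flow_indices(node, index="1"):
    """Depth-first walk that stamps each node with its flow index."""
    node["flow_index"] = index
    for i, child in enumerate(node.get("children", []), start=1):
        assign_flow_indices(child, f"{index}.{i}")
    return node

# The example plan from this section, as a nested dict.
plan = {
    "name": "{document summary}",
    "children": [
        {"name": "::(summarize this text)"},
        {"name": "{clean text}",
         "children": [
             {"name": "::(extract main content)",
              "children": [{"name": "{raw document}"}]},
         ]},
    ],
}
assign_flow_indices(plan)
```

This reproduces the indices shown in the `.ncd` example above (`1`, `1.1`, `1.2`, `1.2.1`, `1.2.1.1`).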
## Phase 3: Post-Formalization

Contextualization and resource demand: declare what resources are needed and where to find them.

### Three Sub-Phases
1. **Re-composition** maps abstract intent to normative context: paradigms, body faculties, perception norms.
2. **Provision** links to concrete resources: file paths, prompt templates, script locations.
3. **Syntax Re-confirmation** ensures tensor coherence: axes, shape, and element types for the reference structure.
### Annotations Added
| Annotation | Purpose | Example |
|---|---|---|
| `%{norm_input}:` | Paradigm ID to load | `h_PromptTemplate-c_Generate-o_Text` |
| `%{body_faculty}:` | Body faculty to invoke | `llm`, `file_system` |
| `%{ref_axes}:` | Named axes for tensors | `[signal]`, `[_none_axis]` |
| `%{file_location}:` | Path to ground data | `provision/data/input.json` |
## Phase 4: Activation

Resolve actual resources and executables: validate, load, and include concrete resources in JSON repositories.

### What It Produces
- `concept_repo.json`: all concept definitions with types, ground flags, axes, and initial data.
- `inference_repo.json`: all inference definitions with a `working_interpretation` for each sequence.
### Sample Output Structure

**`concept_repo.json`**

```json
[
  {
    "concept_name": "{document summary}",
    "type": "{}",
    "is_ground_concept": false,
    "is_final_concept": true,
    "reference_axis_names": ["_none_axis"]
  }
]
```
**`inference_repo.json`**

```json
[
  {
    "flow_info": {"flow_index": "1.1"},
    "inference_sequence": "imperative",
    "concept_to_infer": "{document summary}",
    "function_concept": "::(summarize this text)",
    "value_concepts": ["{clean text}"],
    "working_interpretation": {
      "paradigm": "h_PromptTemplate-c_Generate-o_Text",
      "value_order": {"{clean text}": 1}
    }
  }
]
```
**Key insight:** The `working_interpretation` dict contains exactly what each sequence's IWI (Input Working Interpretation) step needs to execute.
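Because the two repositories reference each other by concept name, one obvious structural check is that every concept an inference mentions is actually defined. A sketch using the field names from the samples above (the `undefined_concepts` helper is hypothetical, not NormCode's actual validator):

```python
# Illustrative cross-check between the two activation repositories.
import json

concept_repo = json.loads("""[
  {"concept_name": "{document summary}", "type": "{}",
   "is_ground_concept": false, "is_final_concept": true,
   "reference_axis_names": ["_none_axis"]},
  {"concept_name": "{clean text}", "type": "{}",
   "is_ground_concept": false, "is_final_concept": false,
   "reference_axis_names": ["_none_axis"]}
]""")

inference_repo = json.loads("""[
  {"flow_info": {"flow_index": "1.1"},
   "inference_sequence": "imperative",
   "concept_to_infer": "{document summary}",
   "value_concepts": ["{clean text}"]}
]""")

def undefined_concepts(concepts, inferences):
    """Names referenced by inferences but missing from the concept repo."""
    known = {c["concept_name"] for c in concepts}
    used = set()
    for inf in inferences:
        used.add(inf["concept_to_infer"])
        used.update(inf["value_concepts"])
    return used - known
```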
## Format Ecosystem

The pipeline uses different representations for different purposes.
| Format | Purpose | Created By |
|---|---|---|
| `.ncds` | Draft (easiest to write) | You or LLM |
| `.ncd` | Formal syntax | Compiler |
| `.ncn` | Natural language companion | Compiler |
| `.ncdn` | Hybrid (NCD + NCN together) | Editor tools |
| `.concept.json` | Concept repository | Activation |
| `.inference.json` | Inference repository | Activation |
## Key Compilation Concepts

### Flow Indices

Hierarchical addresses for every step in the plan. Format: `1.2.3`, where each number is a level in the tree.
```
1                  # Root concept
├── 1.1            # Functional concept (operation)
├── 1.2            # Value concept (first input)
│   └── 1.2.1      # Nested functional concept
│       └── 1.2.1.1  # Nested value concept
└── 1.3            # Value concept (second input)
```
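Because each segment is a number, flow indices sort naturally once parsed into integer tuples (plain string comparison would misorder `1.10` before `1.2`). A small sketch; the tuple representation is an assumption for illustration:

```python
# Parse dotted flow indices into integer tuples for correct ordering.
def parse_flow_index(fi):
    return tuple(int(part) for part in fi.split("."))

indices = ["1.2.1.1", "1", "1.2", "1.1", "1.2.1"]
ordered = sorted(indices, key=parse_flow_index)
```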
### Sequence Types
Categories of operations that determine execution strategy:
| Category | Types | LLM? |
|---|---|---|
| Semantic | imperative, judgement | ✅ Yes |
| Syntactic | assigning, grouping, timing, looping | ❌ No (free) |
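The table above amounts to a two-way lookup. A minimal encoding (the set contents come from the table; the `needs_llm` helper is illustrative, not a real NormCode function):

```python
# Which sequence types require an LLM call, per the category table.
SEMANTIC = {"imperative", "judgement"}                       # LLM-backed
SYNTACTIC = {"assigning", "grouping", "timing", "looping"}   # free

def needs_llm(sequence_type):
    if sequence_type in SEMANTIC:
        return True
    if sequence_type in SYNTACTIC:
        return False
    raise ValueError(f"unknown sequence type: {sequence_type}")
```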
## Compilation Guarantees

### What Compilation Ensures
- Syntactic validity
- Flow consistency
- Sequence completeness
- Reference structure
- Working interpretation
### What It Does NOT Ensure
- Semantic correctness
- Runtime success
- Logical soundness
- Optimal performance
**Key insight:** Compilation validates structure, not intent. The plan might be syntactically perfect but semantically wrong.
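As a concrete taste of what "validates structure" means, here is one flow-consistency check sketched in Python: every non-root flow index must have its parent present. The check is illustrative, not NormCode's actual validator:

```python
# Illustrative structural check: a set of flow indices is consistent only
# if dropping the last segment of any index yields another present index.
def flow_consistent(flow_indices):
    """True if every index's parent (last segment dropped) is present."""
    present = set(flow_indices)
    for fi in present:
        parent, _, _ = fi.rpartition(".")
        if parent and parent not in present:
            return False
    return True
```

A check like this would pass on a syntactically perfect plan even if the plan's steps make no semantic sense, which is exactly the limitation the key insight describes.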