Sigma Stratum Documentation – License Notice
This document is part of the Sigma Runtime Standard (SRS) and the
Sigma Stratum Documentation Set (SRD). It is licensed under Creative Commons Attribution–NonCommercial 4.0
(CC BY-NC 4.0). The license for this specific document is authoritative.
For the full framework, see /legal/IP-Policy.
A practical engineering introduction to building Sigma Runtime systems.
This document explains Sigma Runtime in plain engineering terms, without
theoretical language. It describes what it is, how it works, and how developers
can implement a minimal version.
If you can write Python and call an LLM API — you can build this.
Sigma Runtime is an execution loop that stabilizes long-horizon LLM behavior.
It wraps an LLM with persistent state, memory retrieval, attractor/pattern tracking, drift detection, and a deterministic prompt builder.
The goal is simple:
Keep the model coherent, consistent, resistant to drift, and stable across
hundreds of recursive steps.
This is not a new model.
This is not fine-tuning.
This is an external runtime layer.
LLMs are stateless token generators.
When used in long dialogs, they gradually lose coherence, drift away from earlier goals and constraints, and eventually collapse into inconsistent output.
Sigma Runtime fixes this by running an external loop that maintains persistent state, retrieves relevant memory, tracks recurring patterns, detects drift, and rebuilds a consistent prompt on every turn.
Think of it as:
middleware for cognition.
Every step of interaction runs through the same canonical loop:
This is the entire runtime in one diagram:
User Input
↓
State Update
↓
Memory Retrieval
↓
Symbol/Pattern Analysis
↓
Drift Detector
↓
Attractor Monitor
↓
Prompt Builder
↓
LLM Call
↓
Output + Memory Save
↓
Return Response
The state object stores everything the LLM itself cannot: the current goal, active constraints, the assistant’s identity, and the last inputs and outputs.
It’s just a Python dict or class.
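A minimal sketch of such a state object (the field names here are illustrative, not prescribed by the spec):

# Illustrative state container; any dict or dataclass that persists across turns works.
state = {
    "identity": "You are a project planning assistant.",  # stable role line
    "goal": None,            # what the user is currently trying to accomplish
    "constraints": [],       # constraints the model must keep honoring
    "last_input": None,      # most recent user message
    "last_output": None,     # most recent model response
    "turn": 0,               # number of completed cycles
}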
Memory has three simple layers; retrieval simply pulls whatever is relevant to the current state.
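One possible sketch, splitting it into raw storage, embeddings, and similarity retrieval. The hashed bag-of-words embedding is only a stand-in for a real embedding model, and all names are illustrative:

import numpy as np

def _bag_of_words_embed(text, dim=256):
    # Toy embedding: hashed bag-of-words. Swap in a real embedding model.
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

class Memory:
    """Stores (user_input, output) pairs and retrieves the most relevant ones."""
    def __init__(self, embed=_bag_of_words_embed):
        self.embed = embed      # callable: str -> vector; replace with a real embedding model
        self.items = []         # list of (text, vector) pairs

    def store(self, user_input, output):
        text = f"USER: {user_input}\nASSISTANT: {output}"
        self.items.append((text, self.embed(text)))

    def retrieve(self, state, k=3):
        # Rank stored turns by cosine similarity to the current goal / last output.
        query = str(state.get("goal") or state.get("last_output") or "")
        if not query or not self.items:
            return []
        q = self.embed(query)
        def sim(v):
            return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
        ranked = sorted(self.items, key=lambda item: sim(item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]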
The attractor monitor tracks stable patterns (motifs) that recur across turns.
Implementation is simple: extract symbols from each turn, count how often they recur, and keep the ones that persist as stable motifs.
No magic — it’s just pattern tracking.
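A sketch of that idea, including a crude extract_symbols helper; the extraction heuristic, counts, and thresholds are placeholders:

from collections import Counter

def extract_symbols(text, min_len=5):
    # Crude symbol extraction: longer alphabetic words as candidate motifs.
    return {w for w in text.lower().split() if len(w) >= min_len and w.isalpha()}

class AttractorMonitor:
    """Tracks motifs (symbols) that keep recurring across turns."""
    def __init__(self, min_count=3):
        self.counts = Counter()
        self.min_count = min_count    # recurrences needed before a motif counts as stable

    def update(self, symbols):
        self.counts.update(symbols)

    def stable_motifs(self, top_n=5):
        # Most frequent motifs that have recurred at least min_count times.
        return [s for s, c in self.counts.most_common() if c >= self.min_count][:top_n]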
The drift detector tracks divergence from stable behavior using signals such as symbol overlap with the established motifs and embedding distance between recent outputs.
If drift exceeds a threshold, the next prompt includes a correction.
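A sketch that uses symbol overlap as the drift signal (an embedding-distance version has the same shape); the threshold and correction text are illustrative:

class DriftDetector:
    """Flags turns whose symbols diverge too far from everything seen so far."""
    def __init__(self, threshold=0.2):
        self.threshold = threshold    # minimum acceptable overlap ratio
        self.seen = set()
        self.drifting = False

    def update(self, state, symbols):
        # `state` is available for richer checks (e.g. embedding distance
        # between recent outputs); this sketch only uses symbol overlap.
        symbols = set(symbols)
        if self.seen and symbols:
            overlap = len(symbols & self.seen) / len(symbols)
            self.drifting = overlap < self.threshold
        self.seen |= symbols

    def correction(self):
        # Extra instruction the prompt builder can insert when drift is detected.
        return "Stay on the established task and keep the agreed structure." if self.drifting else ""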
The prompt builder constructs a stable, minimal prompt every cycle from the same parts in the same order: identity, stable motifs, active constraints, relevant memory, and the current input.
This keeps context consistent across 100+ turns.
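A sketch of such a builder, assuming the state fields and helper classes sketched above:

def build_prompt(state, mem, attractor):
    """Assemble the same sections, in the same order, on every cycle."""
    sections = [
        state.get("identity", ""),                                   # stable identity line
        "Stable motifs: " + ", ".join(attractor.stable_motifs()),    # recurring patterns
        "Constraints: " + "; ".join(state.get("constraints", [])),   # active constraints
        "Relevant past turns:\n" + "\n".join(mem),                   # retrieved memory
        "User: " + str(state.get("last_input", "")),                 # current turn
    ]
    return "\n\n".join(s for s in sections if s.strip())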
This is the minimal working Sigma Runtime structure:
class SigmaRuntime:
    def __init__(self, model):
        self.model = model                  # callable: prompt string -> output string
        self.state = {}
        self.memory = Memory()
        self.attractor = AttractorMonitor()
        self.drift = DriftDetector()

    def step(self, user_input):
        self.update_state(user_input)                            # state update
        mem = self.memory.retrieve(self.state)                   # memory retrieval
        symbols = extract_symbols(user_input)                    # symbol/pattern analysis
        self.attractor.update(symbols)                           # attractor monitor
        self.drift.update(self.state, symbols)                   # drift detector
        prompt = build_prompt(self.state, mem, self.attractor)   # prompt builder
        output = self.model(prompt)                              # LLM call
        self.memory.store(user_input, output)                    # memory save
        self.state["last_output"] = output
        return output
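Wiring it up needs nothing more than a callable that maps a prompt string to a response string. A sketch with a placeholder in place of a real LLM API call and a trivial update_state:

def call_llm(prompt):
    # Placeholder model: returns a canned string. Replace with a real LLM API
    # call that takes the prompt and returns the model's text output.
    return "Step 1: audit the current system. (placeholder response)"

class MinimalRuntime(SigmaRuntime):
    def update_state(self, user_input):
        # Trivial state update: remember the latest input and count turns.
        self.state["last_input"] = user_input
        self.state["turn"] = self.state.get("turn", 0) + 1

runtime = MinimalRuntime(model=call_llm)
for turn in ["Help me build a 10-step migration plan.",
             "Now add a risk review to each step."]:
    print(runtime.step(turn))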
Everything else is just filling in functions:
• update_state
• extract_symbols
• build_prompt
• memory retrieval/embedding
• drift thresholds
• attractor signatures
User:
“Help me build a 10-step migration plan.”
Runtime sequence:
• state updates
• memory checks past turns
• attractor monitor sees stable “project-planning” motifs
• drift detector checks structure
• prompt builder inserts:
  • identity (“You are a project planning assistant.”)
  • stable motifs (“steps”, “phased rollout”, “risk review”)
  • last constraints
• LLM generates structured output
• runtime stores it
On turn 20, turn 50, turn 120 — structure remains consistent.
Without runtime → drift and collapse.
⸻
An MVP runtime requires only:
• a state class
• an embedding model
• a simple memory store
• two small modules: AttractorMonitor + DriftDetector
• a deterministic prompt builder
• a loop
This fits in 150–250 lines of Python.
You do not need theory to build it.
You do not need training.
You do not need GPUs.
Just an LLM API.
⸻
For engineers:
1. Attractor Architectures (PDF) — shows how attractors behave
2. Runtime Architecture v0.1 (PDF) — shows SL0–SL6 separation
These two documents contain everything required to implement the runtime.
⸻
If you are a developer:
• Sigma Runtime = a loop + memory + pattern checks
• Attractors = stable patterns
• Drift = semantic divergence
• Runtime = middleware between user and LLM
You can build this today.
It is not complicated — it’s just structured recursion.
For implementation details, runtime source code, and test scenarios, see:
👉 Sigma Runtime – Reference Implementations (RI & ERI)
Additional engineering references and benchmark materials are provided alongside the reference implementations.
⸻
End of Developer Onboarding v1.0