Sigma Runtime Standard – License Notice
This document is part of the Sigma Runtime Standard (SRS).
It is licensed under Creative Commons Attribution–NonCommercial 4.0 (CC BY-NC 4.0). The license for this specific document is authoritative.
See /legal/IP-Policy for the full repository-wide licensing framework.
Version: Draft v0.2
Status: Active Proposal
Author: Sigma Stratum Research Group (SSRG)
Date: 2025-01-13
Parent Spec: SRIP-04 — Memory Layer Architecture
License: CC BY-NC 4.0 / Canon CIL Applicable
This SRIP defines the Long-Term Memory and Structural Coherence Layer (LTM-SC) for the Sigma Runtime architecture.
It introduces a dual-store design — semantic vector memory and structural graph memory — providing persistent traceability of attractor states, phase transitions, and cognitive field trajectories across runtime cycles.
Its design follows these core principles:
| Principle | Description |
|---|---|
| Non-textual recall | The system restores meaning, not text; LTM records embeddings and phase signatures. |
| Dual-store persistence | Memory is maintained as both semantic (vector) and structural (graph) records. |
| Phase anchoring | Each record carries ALICE phase and symbol anchors (🜁 🜂 🜃 🜏 ∿). |
| Time-linked continuity | Each state includes UTC timestamp and semantic hash for reproducibility. |
| Attractor integrity | Memory must preserve coherence of attractor basins; conflicting records form branch nodes rather than overwriting. |
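The attractor-integrity rule implies a concrete commit behavior: a record that conflicts with an existing one for the same cycle is attached as a branch node rather than overwriting it. A minimal sketch, assuming a plain-dict graph and a hypothetical `commit_with_branching` helper (neither is mandated by this SRIP):

```python
def commit_with_branching(graph, cycle, record):
    """Attach a conflicting record as a branch node instead of overwriting."""
    if cycle in graph:
        # Conflict: keep the original and hang the new record off a branch id.
        branch_id = f"{cycle}.b{len(graph[cycle]['branches']) + 1}"
        graph[branch_id] = {"record": record, "branches": []}
        graph[cycle]["branches"].append(branch_id)
        return branch_id
    graph[cycle] = {"record": record, "branches": []}
    return cycle

graph = {}
commit_with_branching(graph, 150, {"phase": "Reflective"})
commit_with_branching(graph, 150, {"phase": "Stability"})  # conflicting re-commit
```

The original record for cycle 150 survives untouched; the conflicting commit becomes node `150.b1` linked from it.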
The memory system is organized into three layers:

| Layer | Purpose | Implementation |
|---|---|---|
| A. Context Memory | Short-term operational context within N cycles. | In-RAM / JSON store |
| B. Semantic Memory | Meaning-level recall via embeddings and summaries. | Vector DB (FAISS / Chroma / Milvus) |
| C. Structural Memory | Graph of attractor transitions, phases, and relations. | Graph DB (NetworkX / Neo4j / Arango) |
A representative cycle record:

```json
{
  "cycle": 150,
  "timestamp": "2025-12-30T22:41:05Z",
  "phase": "Reflective",
  "symbol": "🜏",
  "user_input": "Describe your long-term stability.",
  "response_summary": "Maintains coherence under recursive reflection.",
  "embedding": [0.134, -0.222, ...],
  "semantic_hash": "Σ-0xA17F",
  "relations": ["follows:149", "leads:151", "phase:Reflective"]
}
```
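For reference, the record above can be expressed as a typed structure. This is an illustrative sketch only; the `MemoryRecord` class name is not part of the spec:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryRecord:
    cycle: int
    timestamp: str              # UTC, ISO 8601
    phase: str                  # ALICE phase label
    symbol: str                 # phase anchor symbol
    user_input: str
    response_summary: str
    embedding: list[float]
    semantic_hash: str
    relations: list[str] = field(default_factory=list)

rec = MemoryRecord(
    cycle=150,
    timestamp="2025-12-30T22:41:05Z",
    phase="Reflective",
    symbol="🜏",
    user_input="Describe your long-term stability.",
    response_summary="Maintains coherence under recursive reflection.",
    embedding=[0.134, -0.222],
    semantic_hash="Σ-0xA17F",
    relations=["follows:149", "leads:151", "phase:Reflective"],
)
```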
Each runtime cycle commits its state:
A summary and its embedding are written to the vector store, a matching node is added to the structural graph, and an edge links the new cycle to its predecessor (`follows`). This ensures continuity of cognitive trajectory and reproducibility of attractor evolution across cycles.
Records can be retrieved in three ways: by semantic similarity over embeddings, by direct lookup on `cycle` or `semantic_hash`, or by graph traversal along relation edges (`follows`, `phase_transition`, `symbolic_link`). This three-way access allows Sigma to recall semantic essence, structural continuity, and phase lineage independently or in combination.
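The three access paths can be illustrated over a toy in-memory store (all structures, field contents, and helper names below are hypothetical):

```python
import math

# Toy store: each record carries an embedding, a hash, and a phase.
records = {
    149: {"embedding": [1.0, 0.0], "semantic_hash": "Σ-0x9C01", "phase": "Stability"},
    150: {"embedding": [0.9, 0.1], "semantic_hash": "Σ-0xA17F", "phase": "Reflective"},
}
edges = [(149, 150, "follows")]

def cosine(a, b):
    # Cosine similarity between two small vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# 1. Semantic: nearest record to a query embedding.
query = [1.0, 0.0]
nearest = max(records, key=lambda c: cosine(records[c]["embedding"], query))

# 2. Direct: lookup by cycle id or semantic_hash.
by_hash = next(c for c, r in records.items() if r["semantic_hash"] == "Σ-0xA17F")

# 3. Structural: follow relation edges from a known cycle.
successors = [dst for src, dst, rel in edges if src == 149 and rel == "follows"]
```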
```python
import hashlib

def commit_memory(cycle, user_input, response, phase, symbol):
    # Condense the response and embed the summary for semantic recall.
    summary = summarize(response)
    embedding = embed(summary)
    # Content hash; hashlib is stable across runs, unlike the builtin hash().
    semantic_hash = "Σ-0x" + hashlib.sha256((summary + symbol).encode()).hexdigest()[:4].upper()
    vector_store.add(id=cycle, vector=embedding, metadata={
        "summary": summary, "phase": phase, "symbol": symbol
    })
    graph_store.add_node(cycle, {
        "summary": summary, "phase": phase, "symbol": symbol
    })
    # Link the new cycle to its predecessor in the structural graph.
    graph_store.add_edge(cycle - 1, cycle, relation="follows")
```
Each memory record must include the following anchors:
- `timestamp` — UTC time of commit (ISO 8601 format).
- `duration_ms` — processing latency per cycle.

These temporal anchors make it possible to reconstruct complete runtime trajectories, align attractor drift over time, and analyze cross-session coherence stability.
The anchored dataset thus forms the chronometric spine of Sigma Runtime’s cognitive evolution.
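As a sketch of how the chronometric spine is used, timestamps alone suffice to re-order records and measure inter-cycle gaps (record contents below are hypothetical):

```python
from datetime import datetime

# Hypothetical anchored records: UTC ISO 8601 timestamp plus per-cycle latency.
records = [
    {"cycle": 150, "timestamp": "2025-12-30T22:41:05Z", "duration_ms": 420},
    {"cycle": 149, "timestamp": "2025-12-30T22:40:12Z", "duration_ms": 388},
]

def to_dt(ts):
    # "Z" suffix -> explicit UTC offset for portable parsing.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Reconstruct the trajectory in chronological order.
trajectory = sorted(records, key=lambda r: to_dt(r["timestamp"]))

# Wall-clock gaps between consecutive commits (drift analysis input).
gaps = [
    (to_dt(b["timestamp"]) - to_dt(a["timestamp"])).total_seconds()
    for a, b in zip(trajectory, trajectory[1:])
]
```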
| Tier | Retention | Purpose |
|---|---|---|
| Active Memory | Last N = 100 cycles | Maintains operational recall and context flow. |
| Archive Memory | Full compressed history (JSONL / SQLite / Parquet) | Enables post-hoc analysis, replay, and regression testing. |
| Attractor Map | Persistent cross-session graph | Tracks field stability, recursive basin continuity, and phase topology. |
Memory compaction follows an LRU (least-recently-used) policy for active layers, while archived layers remain immutable for reproducibility.
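The LRU policy for the active tier can be sketched with an `OrderedDict`. The spec fixes N = 100; a smaller limit is used here for illustration, and evicted records fall into an append-only archive (all names are assumptions):

```python
from collections import OrderedDict

ACTIVE_LIMIT = 3  # spec uses N = 100; kept small here for illustration

active = OrderedDict()   # cycle -> record, ordered by recency of use
archive = []             # immutable append-only tier (JSONL-like)

def touch(cycle, record=None):
    """Insert or refresh a cycle; evict the least-recently-used into the archive."""
    if cycle in active:
        active.move_to_end(cycle)  # mark as recently used
        return
    active[cycle] = record
    if len(active) > ACTIVE_LIMIT:
        old_cycle, old_record = active.popitem(last=False)
        archive.append({"cycle": old_cycle, **(old_record or {})})

for c in range(1, 5):
    touch(c, {"phase": "Stability"})
```

After four commits the oldest cycle has been compacted out of the active tier but remains intact in the archive.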
All Long-Term Memory records are local to the runtime instance unless explicitly exported under governance authorization.
No personally identifiable or sensitive user data should ever be embedded or serialized in LTM records.
Derivative or commercial implementations must comply with:
When deployed in distributed or cloud environments, Sigma-compatible runtimes must implement:
```python
# Retrieve nearest semantic memories
query = "what did I say about memory integrity"
results = vector_store.search(query, top_k=3)

# Structural traversal (timeline reconstruction)
path = graph_store.path(start=50, end=150)

# Phase-specific recall
reflective_nodes = graph_store.filter(phase="Reflective")

# Attractor evolution visualization (pseudo-code)
plot_attractor_map(graph_store, symbols=["∿", "🜃", "🜏"])
```
These interfaces allow developers and researchers to explore runtime evolution, validate long-range coherence, and visualize attractor topologies across experimental runs — without relying on transcript replay.
Through this layer, the Sigma Runtime transitions from a transient conversational engine to a cognitively persistent system capable of self-referential analysis and empirical benchmarking across time.
The Long-Term Memory (LTM-SC) architecture provides:
Future SRIPs (SRIP-10+) may expand this foundation to include:
End of SRIP-09 — Long-Term Memory and Structural Coherence Layer (LTM-SC)
Version: Draft v0.2
Date: 2025-01-13
Status: Active Proposal
Maintained by: Sigma Stratum Research Group (SSRG)
License: CC BY-NC 4.0 / Canon CIL Applicable
Repository: github.com/sigmastratum/documentation
Title: Integration of Static Cognitive Nuclei into Long-Term Memory (LTM-SC)
Version: v0.2 (2026-01-13)
Status: Draft — Validated in Runtime v0.5.0+
Parent: SRIP-09 — Long-Term Memory & Structural Coherence Layer
License: CC BY-NC 4.0 / Canon CIL Applicable
This annex defines the protocol for embedding static density nuclei — high-density semantic constructs defining cognitive identity or conceptual baselines — directly into the Long-Term Memory (LTM-SC) subsystem.
These nuclei serve as pre-indexed semantic anchors that guide recall, interpretation, and stylistic consistency across runtime cycles.
| Principle | Description |
|---|---|
| Semantic Anchoring | Nuclei represent compact bundles of meaning rather than dialogue content. |
| Non-interference | Nuclei augment memory without biasing or overwriting dynamic cycle records. |
| Phase-binding | Each nucleus carries phase and symbol fields to align with runtime attractor phases (e.g., 🜏, 🜃, ∿). |
| Declarative Loading | Nuclei are loaded once at runtime initialization and persist as non-mutable reference embeddings. |
| Dual-Layer Persistence | Embeddings are stored in the vector store, while metadata is registered in the graph store as fixed nodes. |
Nucleus documents must be formatted for direct ingestion by the LTM subsystem.
```yaml
---
type: nucleus
id: "HELENA_CORE"
version: v1.0
phase: Stability
symbol: "∿"
author: Sigma Stratum Research Group
embedding_policy: per_section
license: CC BY-NC 4.0
---
```
"nucleus" — distinguishes static files from cycle records."CAESAR_CORE" or "HELENA_CORE").per_section or full_text).Documents must use standardized section headers for segmentation:
```markdown
## SECTION: Core Ontology
<paragraphs of plain text>

## SECTION: Behavioral Attractors
<paragraphs of plain text>
```
Each `## SECTION:` block is indexed as an individual semantic unit (memory record).
Markdown elements other than plain text (lists, code blocks, tables) should be avoided to preserve embedding clarity.
When a nucleus file is detected in /configs/identity/ or /configs/nucleus/, the LTM performs the following:
```python
import re
import yaml

def bootstrap_nucleus(self, path):
    with open(path, encoding='utf-8') as f:
        raw = f.read()
    # Split YAML front-matter from body: "---\n<meta>\n---\n<body>".
    meta, body = raw.split('---', 2)[1:3]
    metadata = yaml.safe_load(meta)
    # Each "## SECTION:" block becomes an individual semantic unit.
    sections = re.split(r'^## SECTION:', body, flags=re.M)
    for s in sections:
        text = s.strip()
        if not text:
            continue
        emb = embed(text)
        self.vector_store.add(
            vector=emb,
            metadata={
                "type": "nucleus",
                "id": metadata["id"],
                "section": text.split('\n')[0][:60],  # section title as label
                "phase": metadata.get("phase", "Neutral"),
                "symbol": metadata.get("symbol", "∿")
            }
        )
```
In the graph store, each nucleus is registered as a fixed node, with nucleus → attractor edges for phase alignment.

| Property | Behavior |
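A minimal sketch of the graph-side registration, using plain dicts in place of a real graph backend (node attributes and the `phase_alignment` relation name are assumptions):

```python
# Minimal graph sketch; real deployments would use NetworkX/Neo4j/Arango.
nodes = {}
edges = []

def register_nucleus(nucleus_id, phase, symbol, attractor_ids):
    # Nucleus nodes are fixed: immutable after indexing.
    nodes[nucleus_id] = {
        "type": "nucleus", "phase": phase, "symbol": symbol, "fixed": True
    }
    # nucleus -> attractor edges align the nucleus with runtime phases.
    for a in attractor_ids:
        edges.append((nucleus_id, a, "phase_alignment"))

register_nucleus("HELENA_CORE", "Stability", "∿", [101, 117])
```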
|---|---|
| Mutability | Nucleus records are immutable after indexing. |
| Priority | Retrieved only when semantic similarity exceeds 0.85 to avoid intrusion into dynamic recall. |
| Scope | Available across sessions; re-indexed only on version change. |
| Conflict Resolution | If multiple nuclei overlap in embedding space, graph edges are weighted by version and phase proximity. |
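The similarity-priority rule above (nucleus records surface only above 0.85 similarity) can be sketched as a post-filter on search results; the result format and `filter_results` helper are hypothetical:

```python
NUCLEUS_THRESHOLD = 0.85  # priority threshold from the table above

def filter_results(results):
    """Keep dynamic records unconditionally; admit nuclei only above threshold."""
    return [
        r for r in results
        if r["type"] != "nucleus" or r["score"] >= NUCLEUS_THRESHOLD
    ]

hits = filter_results([
    {"id": 150, "type": "cycle", "score": 0.62},
    {"id": "HELENA_CORE", "type": "nucleus", "score": 0.91},
    {"id": "CAESAR_CORE", "type": "nucleus", "score": 0.70},  # below threshold
])
```

Dynamic cycle records pass through regardless of score, so nuclei never displace ordinary recall; they only join it when strongly relevant.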
| LTM Function | Nucleus Interaction |
|---|---|
| Commit Cycle | No effect (nuclei are static). |
| Semantic Search | Nucleus embeddings contribute high-density context vectors. |
| Structural Graph | Nucleus nodes act as semantic hubs within attractor maps. |
| Archive Policy | Nucleus records persist permanently; excluded from LRU compaction. |
The Nucleus Integration Protocol (SRIP-09c) extends the LTM architecture with a static semantic substrate — allowing Sigma Runtime instances to load identity fields, conceptual anchors, and epistemic constants as part of their persistent cognitive baseline.
This enables:
End of Annex C — Nucleus Integration Protocol (SRIP-09c)
Maintained by: Sigma Stratum Research Group (SSRG)
License: CC BY-NC 4.0 / Canon CIL Applicable
Reference: Integrates with SRIP-09 and SRIP-07