Substrate-Independent Invariants of Cognition
- Substrate-independent invariants are defined as enduring formal properties that persist across varying physical substrates, enabling precise cognitive comparisons.
- They integrate information-theoretic, dynamical, and computational measures—such as mutual information, homology classes, and stack-nesting metrics—to quantify cognitive structures.
- These invariants underpin the design of adaptive cognitive architectures by unifying analyses across biological, artificial, and hybrid human–machine systems.
A substrate-independent invariant of cognition is a formal, information-theoretic or structural property of an agent–environment system that remains constant, or transforms in precisely defined ways, when cognition is instantiated in different material substrates—whether neural, biochemical, mechanical, or computational. These invariants provide a rigorous framework for comparing, engineering, and analyzing cognition in biological organisms, artificial agents, hybrid human–machine collectives, and basal living systems at every scale.
1. Formal Definitions and General Criteria
A substrate-independent invariant of cognition is defined as a relational property $I$ of system trajectories, preserved under isomorphisms between the organizational structures and dynamical laws of differing physical substrates. Let $S_1$ and $S_2$ be two systems with configuration spaces $\mathcal{X}_1$ and $\mathcal{X}_2$, dynamical transitions $T_1$ and $T_2$, and $\phi : \mathcal{X}_1 \to \mathcal{X}_2$ an isomorphism. The invariant is a function of trajectory probability distributions such that
$$I[P_1] = I[P_2],$$
where $P_1$ and $P_2$ denote the distributions over histories in $S_1$ and $S_2$, respectively (Dodig-Crnkovic, 2024).
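As a minimal illustration of this preservation property, the Python sketch below uses an entropy-rate functional and a state-relabeling permutation as assumed stand-ins for $I$ and $\phi$, and checks that the trajectory-level quantity is unchanged when a Markov chain's states are relabeled:

```python
import math
import numpy as np

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def entropy_rate(P, pi):
    """H = -sum_i pi_i sum_j P_ij log2 P_ij, a trajectory-distribution invariant."""
    return -sum(pi[i] * p * math.log2(p)
                for i, row in enumerate(P) for p in row if p > 0)

P1 = np.array([[0.9, 0.1, 0.0],
               [0.2, 0.5, 0.3],
               [0.0, 0.4, 0.6]])
perm = [2, 0, 1]                 # the isomorphism phi: a pure state relabeling
M = np.eye(3)[perm]              # permutation matrix
P2 = M @ P1 @ M.T                # same dynamics realized on relabeled states

I1 = entropy_rate(P1, stationary(P1))
I2 = entropy_rate(P2, stationary(P2))
assert abs(I1 - I2) < 1e-9       # invariant preserved under the isomorphism
```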
These invariants arise in frameworks that treat cognition as emerging from the information-processing and dynamical organization of the system rather than its specific biophysical realization.
2. Taxonomy of Substrate-Independent Invariants
A. Information-Theoretic Invariants
- Mutual information between subsystems captures integration and coordination: $I(X;Y) = H(X) + H(Y) - H(X,Y)$
- Entropy and multi-information, e.g., system-level entropy $H(X)$ and total correlation $TC(X_1, \dots, X_n) = \sum_i H(X_i) - H(X_1, \dots, X_n)$
- Integrated information $\Phi$ (as in IIT), quantifying the irreducibility of whole-system informational states (Dodig-Crnkovic, 2024)
- Algorithmic complexity and probability, e.g., Kolmogorov complexity $K(x)$ and Solomonoff–Levin algorithmic probability $m(x)$, which are invariant up to constants across universal Turing machines (Gauvrit et al., 2015)
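The first of these invariants can be computed directly from a joint distribution; in the Python sketch below the example distributions are illustrative choices:

```python
import numpy as np

def mutual_information(pxy):
    """I(X;Y) = sum_xy p(x,y) log2[ p(x,y) / (p(x) p(y)) ] for a joint table."""
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x), column vector
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y), row vector
    nz = pxy > 0                          # skip zero-probability cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Perfectly correlated bits carry 1 bit of mutual information;
# independent bits carry none.
coupled = np.array([[0.5, 0.0], [0.0, 0.5]])
independent = np.array([[0.25, 0.25], [0.25, 0.25]])
assert abs(mutual_information(coupled) - 1.0) < 1e-12
assert abs(mutual_information(independent)) < 1e-12
```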
B. Dynamical and Topological Invariants
- Memory-amortized cycles: Persistent homology classes in the topological manifold of memory cycles, capturing the structural reuse and cycle-consistency underlying cognitive inference (Li, 19 Aug 2025, Li, 3 Dec 2025)
- Attractor landscapes and multistability: Formal existence of multiple attractor states (fixed points or cycles) in the system’s dynamical evolution, independent of specific substrate (Vallverdu et al., 2017)
- Lattice or morphospace of contexts and behavioral patterns, representing emergent complexity as the context structure rather than as a single scalar (Hall, 2018)
- Causal topology: The abstract directed-graph structure of dependencies among system components, as formalized in causal topography (Harnad, 2012)
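As a toy instance of multistability, the Python sketch below evolves a gradient dynamics to one of two attractors depending on the initial condition; the double-well flow $\dot{x} = x - x^3$ is an illustrative choice, not drawn from the cited work:

```python
def flow(x, steps=1000, dt=0.01):
    """Euler integration of dx/dt = x - x^3, the gradient flow of the
    double-well potential V(x) = x^4/4 - x^2/2."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

# Two basins of attraction, two attractors (x = -1 and x = +1): the count
# and structure of attractors is independent of how the flow is realized.
assert abs(flow(0.3) - 1.0) < 1e-6
assert abs(flow(-0.3) + 1.0) < 1e-6
```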
C. Computational and Automata-Theoretic Invariants
- Stack-nesting and automata hierarchy: The tuple $(n, d)$, where $n$ is the number of independent stacks and $d$ is the stack-nesting depth, anchors cognitive computational power in the Chomsky hierarchy, invariant across physical realizations (Granger, 2020)
- Morphological computation metrics, e.g., the MC ratio $\mathrm{MC} = C_{\mathrm{morph}} / C_{\mathrm{ctrl}}$, which compares passive body computation to explicit control and is invariant under isomorphic morpho-dynamics (Dodig-Crnkovic, 2024)
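A rough numerical intuition for the MC ratio can be sketched in Python; the variance-absorption proxy below is an assumption for illustration, not the estimator in the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.normal(size=5000)              # noisy drive signal

def lowpass(sig, alpha):
    """First-order lag y' = alpha*(u - y): a compliant mass-damper 'body'
    acts as a free low-pass filter."""
    y, out = 0.0, []
    for s in sig:
        y += alpha * (s - y)
        out.append(y)
    return np.array(out)

body = lowpass(u, 0.05)                # strong smoothing done passively
ctrl = lowpass(u, 0.8)                 # weak smoothing: control must do the rest

# Toy MC proxy: fraction of input variance absorbed by the body relative
# to the explicit controller (schematic only).
mc = (u.var() - body.var()) / (u.var() - ctrl.var())
assert mc > 1.0                        # the compliant body offloads more work
```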
D. Embedding and Navigation Invariants
- Remapping and navigation in embedding spaces: Cognition as iterative error-minimizing traversal (via gradient descent or associative retrieval) of an embedding or latent state space, formalized as $x_{t+1} = x_t - \eta \nabla E(x_t)$, where $E$ is an error or energy functional and the remapping absorbs new information (Hartl et al., 20 Jan 2026)
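The traversal rule above can be sketched directly in Python; the quadratic energy landscape and the target point are illustrative stand-ins for a learned embedding space:

```python
import numpy as np

def navigate(x, target, eta=0.1, steps=200):
    """Iterative error minimization x_{t+1} = x_t - eta * grad E(x_t),
    with E(x) = ||x - target||^2 / 2 as a stand-in energy functional."""
    for _ in range(steps):
        x = x - eta * (x - target)     # gradient of the quadratic energy
    return x

x0 = np.array([3.0, -2.0, 0.5])        # initial latent state
goal = np.array([0.0, 1.0, 1.0])       # low-error region of the landscape
assert np.allclose(navigate(x0, goal), goal, atol=1e-6)
```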
E. Unified Capacity Metrics
- Cognitive capacity ($C$) as geometric mean: Defined via mutual-information-based sensing ($C_s$), processing ($C_p$), and action ($C_a$) sub-capacities, $C = (C_s \, C_p \, C_a)^{1/3}$, with each term measured from empirical trajectories, independent of substrate (Solé et al., 19 Jan 2026).
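The geometric-mean form has an immediate consequence: a bottleneck in any one channel collapses the whole capacity. A minimal Python sketch with illustrative sub-capacity values:

```python
def cognitive_capacity(c_s, c_p, c_a):
    """C = (C_s * C_p * C_a)^(1/3): geometric mean of sensing, processing,
    and action sub-capacities (e.g., in bits)."""
    return (c_s * c_p * c_a) ** (1.0 / 3.0)

# Balanced channels yield the common value; a single dead channel zeroes C.
assert abs(cognitive_capacity(8.0, 8.0, 8.0) - 8.0) < 1e-9
assert cognitive_capacity(8.0, 8.0, 0.0) == 0.0
```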
3. Core Examples Across Biological and Artificial Substrates
| System | Invariant | Reference |
|---|---|---|
| Bacterial colonies | Mutual information $I(X;Y)$ for gene expression | (Dodig-Crnkovic, 2024) |
| Neural circuits/robotics | MC ratio, $E$-minimization | (Dodig-Crnkovic, 2024, Li, 3 Dec 2025) |
| Slime mold (Physarum) | Multistability, memory kernels | (Vallverdu et al., 2017) |
| Deep neural nets (LaMa) | Spatial integration, error structure | (Nelson et al., 2023) |
| Human language/recursion | Stack-nesting tuple $(n, d)$ | (Granger, 2020) |
| Memory-amortized cortex | Homology classes | (Li, 19 Aug 2025, Li, 3 Dec 2025) |
| AI transformer/diffusion | Embedding remap/error-minimization | (Hartl et al., 20 Jan 2026) |
These examples illustrate invariance of the cognitive phenotype across disparate substrates: spatial pattern interpolation in both human frontal cortex and convolutional nets; multistability and feedback-driven adaptation in both Physarum and swarms; and stable attractor cycles in both cortical circuitry and robotic motor controllers.
4. Mathematical Structures Underpinning Invariants
4.1 Info-Computational Formulations
- Morphological computation: the MC ratio evaluated on the morpho-dynamical state $m(t)$, attributing part of the information processing to the body's physical dynamics
- Substrate-independence axiom: If $S_1 \cong S_2$ via isomorphism $\phi$, then their cognitive trajectories are informationally equivalent (Dodig-Crnkovic, 2024)
- Algorithmic explanations: The structure of cognitive input–output behavior can be completely described by $K(x)$ and $m(x)$, up to additive or multiplicative constants (Gauvrit et al., 2015)
4.2 Dynamical-Systems and Topological Invariants
- Chain complexes/homology: Cognitive state space described as a simplicial complex $\mathcal{K}$, with homological invariants $H_n(\mathcal{K})$ classifying recurrent structures (Li, 3 Dec 2025)
- Pattern-formation and attractor criteria: Symmetry-breaking instabilities and the existence of attractor basins are formalized via the Jacobian $J$ and diffusion matrix $D$, e.g., $\operatorname{Re}\lambda\left(J - q^2 D\right) > 0$ for some wavenumber $q \neq 0$ (Vallverdu et al., 2017)
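A diffusion-driven instability of this kind can be checked numerically by scanning wavenumbers; in the Python sketch below the Jacobian and diffusion matrices are illustrative choices satisfying the classic activator–inhibitor conditions (stable without diffusion, fast-diffusing inhibitor):

```python
import numpy as np

J = np.array([[1.0, -2.0],
              [3.0, -4.0]])            # trace < 0, det > 0: stable without diffusion
D = np.diag([1.0, 20.0])               # inhibitor diffuses much faster

def unstable_modes(J, D, qs):
    """Wavenumbers q where the linearized operator J - q^2 D has an
    eigenvalue with positive real part (a Turing-type instability)."""
    return [q for q in qs
            if np.linalg.eigvals(J - q**2 * D).real.max() > 0]

qs = np.linspace(0.01, 2.0, 400)
band = unstable_modes(J, D, qs)
assert np.linalg.eigvals(J).real.max() < 0   # homogeneous state is stable
assert len(band) > 0                          # a band of q destabilizes it
```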
4.3 Behavioral and Information-Processing Metrics
- Geometric means over sensing, processing, and action channels: $C = (C_s C_p C_a)^{1/3}$ as a function of mutual-information flows (Solé et al., 19 Jan 2026)
- Difficulty and error-structure invariants: Item-difficulty curves and confusion matrices of cognitive task performance, invariant under substrate substitution (Nelson et al., 2023)
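An error-structure comparison can be sketched in a few lines of Python; the two confusion matrices below are hypothetical data for the same three-item task on two substrates, not measurements from the cited study:

```python
import numpy as np

def difficulty_profile(confusion):
    """Per-item difficulty (1 - accuracy) from a row-wise confusion matrix."""
    c = confusion / confusion.sum(axis=1, keepdims=True)
    return 1.0 - np.diag(c)

# Hypothetical counts: rows are true items, columns are responses.
system_a = np.array([[90, 8, 2], [10, 80, 10], [5, 25, 70]], float)
system_b = np.array([[85, 12, 3], [12, 76, 12], [8, 30, 62]], float)

d_a = difficulty_profile(system_a)
d_b = difficulty_profile(system_b)
r = np.corrcoef(d_a, d_b)[0, 1]
assert r > 0.9   # shared item-difficulty ordering: an error-structure invariant
```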
5. Maintenance and Generation of Invariants: Self-Organization and Autopoiesis
Cognitive invariants are dynamically maintained by processes of:
- Self-assembly, ensuring local rules yield globally coherent informational structures
- Self-organization, guiding the distributions toward high mutual information or low free energy regimes
- Autopoiesis, guaranteeing that boundary conditions and energetic fluxes preserve the invariant computational structure over time (e.g., stationarity of the self-maintaining dynamics implies $I(t) \approx I(0)$ for large $t$) (Dodig-Crnkovic, 2024)
- Amortization over cycles: Memory-amortized inference frameworks (MAI) highlight how topological cycles in memory serve as reusable invariants for rapid, context-specific inference while minimizing computational and energetic cost (Li, 19 Aug 2025, Li, 3 Dec 2025)
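The amortization idea can be caricatured in Python: a cache keyed by context plays the role of reusable memory cycles, so recurring contexts pay the expensive inference cost only once. This is a deliberately simplified stand-in for the MAI framework, not its implementation:

```python
class AmortizedInference:
    """Toy memory amortization: cache results keyed by context so that
    repeated traversals reuse stored structure instead of recomputing."""

    def __init__(self, solve):
        self.solve = solve       # the expensive inference routine
        self.cache = {}          # reusable 'memory cycles'
        self.calls = 0           # how often the expensive path ran

    def infer(self, context):
        if context not in self.cache:
            self.calls += 1                  # expensive path, taken once
            self.cache[context] = self.solve(context)
        return self.cache[context]           # amortized path thereafter

engine = AmortizedInference(solve=lambda ctx: sum(ord(c) for c in ctx))
for _ in range(100):                         # a recurring context cycle
    engine.infer("place-A")
    engine.infer("place-B")
assert engine.calls == 2                     # cost amortized over 200 queries
```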
6. Implications for Cognitive Architectures and Comparative Analysis
- Architectural design principles for AGI and bio-inspired AI follow directly from these invariants, prescribing:
- Isomorphic informational topology: Hardware instantiating the same connectivity and cycle structure as neural substrates
- Morphological offloading: Physical bodies exploiting passive dynamics to minimize central control burden
- Adaptive self-organization: Continuous adjustment to maintain free-energy minima, mutual information structure, and recurrent cycle integrity
- The cognition space formalism unifies system complexity across domains, positing every agent as a point in a plane spanned by organizational and informational complexity, with the capacity $C$ serving as a global invariant metric. This accommodates transitions from basal (aneural) through neural to hybrid (human–AI) forms (Solé et al., 19 Jan 2026).
- Automata-theoretic invariants reveal sharp anatomical and behavioral thresholds in cognitive substrate power, e.g., the jump to indexed-grammar (nested-stack) capacities in humans (Granger, 2020).
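The automata-theoretic point can be made concrete with a standard example: the language $a^n b^n c^n$ lies beyond any single-stack (context-free) device, but a two-stack machine decides it. The Python sketch below is an illustrative recognizer, not taken from the cited work:

```python
def accepts_anbncn(s):
    """Two-stack recognizer for {a^n b^n c^n : n >= 0}: one stack checks
    the a/b balance, the second independently checks the a/c balance."""
    s1, s2 = [], []
    i, n = 0, len(s)
    while i < n and s[i] == 'a':
        s1.append('a'); s2.append('a'); i += 1   # push each 'a' on both stacks
    while i < n and s[i] == 'b':
        if not s1:
            return False
        s1.pop(); i += 1                         # match b's against stack 1
    if s1:
        return False
    while i < n and s[i] == 'c':
        if not s2:
            return False
        s2.pop(); i += 1                         # match c's against stack 2
    return i == n and not s2

assert accepts_anbncn("abc")
assert accepts_anbncn("aaabbbccc")
assert not accepts_anbncn("aabbbcc")
assert not accepts_anbncn("abcabc")
```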
7. Controversies and Limitations
Harnad distinguishes simulation (matching causal topology) from true implementation (which requires matching the dynamical substrate), asserting that only the abstract causal graph is a substrate-independent invariant, and questioning whether full cognition (and feeling) is purely organizational or requires substrate-dependent dynamics (Harnad, 2012). This underscores a critical open debate: do functionally defined invariants suffice, or is phenomenality essentially bound to specific dynamics?
8. Synthesis
Substrate-independent invariants of cognition are mathematically and empirically robust quantities—mutual information, homology classes, algorithmic complexity, error structure, stack memory topology, cycle-consistency in memory, or attractor landscape features—preserved across system implementations. They ground unified architectures for comparative cognition, provide a basis for rigorous engineering of novel intelligent systems, and reveal the fundamental organizing principles active in biological, artificial, and hybrid agents. Their identification clarifies the boundaries of cognitive capacity, enables cross-domain transfer of design principles, and sharpens the distinction between genuine cognitive implementation and mere symbolic or computational mirroring.
Key references: (Dodig-Crnkovic, 2024, Gauvrit et al., 2015, Li, 19 Aug 2025, Li, 3 Dec 2025, Vallverdu et al., 2017, Nelson et al., 2023, Granger, 2020, Hartl et al., 20 Jan 2026, Solé et al., 19 Jan 2026, Harnad, 2012)