
Substrate-Independent Invariants of Cognition

Updated 22 January 2026
  • Substrate-independent invariants are defined as enduring formal properties that persist across varying physical substrates, enabling precise cognitive comparisons.
  • They integrate information-theoretic, dynamical, and computational measures—such as mutual information, homology classes, and stack-nesting metrics—to quantify cognitive structures.
  • These invariants underpin the design of adaptive cognitive architectures by unifying analyses across biological, artificial, and hybrid human–machine systems.

A substrate-independent invariant of cognition is a formal, information-theoretic or structural property of an agent–environment system that remains constant, or transforms in precisely defined ways, when cognition is instantiated in different material substrates—whether neural, biochemical, mechanical, or computational. These invariants provide a rigorous framework for comparing, engineering, and analyzing cognition in biological organisms, artificial agents, hybrid human–machine collectives, and basal living systems at every scale.

1. Formal Definitions and General Criteria

A substrate-independent invariant of cognition is defined as a relational property $I$ of system trajectories, preserved under isomorphisms between the organizational structures and dynamical laws of differing physical substrates. Let $S$ and $S'$ be two systems with configuration spaces $C, C'$ and dynamical transitions $T, T'$, and let $\varphi: (C, T) \rightarrow (C', T')$ be an isomorphism. The invariant $I$ is a function of trajectory probability distributions such that

I[P(C0CT)]=I[P(φ(C0)φ(CT))]I[P(C_0 \rightarrow \ldots \rightarrow C_T)] = I[P'( \varphi(C_0) \rightarrow \ldots \rightarrow \varphi(C_T))]

where $P$ and $P'$ denote the distributions over histories in $S$ and $S'$, respectively (Dodig-Crnkovic, 2024).

These invariants arise in frameworks that treat cognition as emerging from the information-processing and dynamical organization of the system rather than its specific biophysical realization.
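As a minimal illustration of the invariance condition above, the sketch below (using an arbitrary toy transition matrix; all values are illustrative, not drawn from the cited work) checks that the entropy of the length-2 trajectory distribution is unchanged when an isomorphism $\varphi$ relabels the configuration space:

```python
import math

pi = [0.5, 0.3, 0.2]            # initial distribution over configurations C
T  = [[0.7, 0.2, 0.1],          # transition probabilities on C
      [0.1, 0.8, 0.1],
      [0.2, 0.2, 0.6]]

def path_entropy(pi, T):
    """Shannon entropy of the distribution over length-2 trajectories."""
    h = 0.0
    for c0 in range(3):
        for c1 in range(3):
            p = pi[c0] * T[c0][c1]
            if p > 0:
                h -= p * math.log2(p)
    return h

# A relabeling isomorphism phi: state i in S maps to state phi[i] in S'.
phi = [1, 2, 0]
inv = [phi.index(i) for i in range(3)]
pi2 = [pi[inv[i]] for i in range(3)]
T2  = [[T[inv[i]][inv[j]] for j in range(3)] for i in range(3)]

# The trajectory-entropy invariant is preserved under phi.
assert abs(path_entropy(pi, T) - path_entropy(pi2, T2)) < 1e-12
```

The check succeeds for any bijective relabeling, since the invariant depends only on the multiset of trajectory probabilities, not on how configurations are labeled.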

2. Taxonomy of Substrate-Independent Invariants

A. Information-Theoretic Invariants

  • Mutual information $I(X;Y)$ between subsystems captures integration and coordination:

$$I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}$$

  • Entropy and multi-information, e.g., system-level entropy $H(X)$ and total correlation
  • Integrated information $\Phi$ (as in IIT), quantifying the irreducibility of whole-system informational states (Dodig-Crnkovic, 2024)
  • Algorithmic complexity and probability, e.g., Kolmogorov complexity $K_U(s)$ and Solomonoff–Levin algorithmic probability $m_U(s)$, which are invariant up to constants across universal Turing machines (Gauvrit et al., 2015)
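The mutual-information formula above can be instantiated directly. This sketch (with made-up joint tables, purely for illustration) shows $I(X;Y)$ vanishing for independent subsystems and rising when they are coordinated:

```python
import math

def mutual_information(joint):
    """I(X;Y) = sum_{x,y} p(x,y) log2[ p(x,y) / (p(x) p(y)) ]."""
    px = [sum(row) for row in joint]                  # marginal p(x)
    py = [sum(col) for col in zip(*joint)]            # marginal p(y)
    return sum(p * math.log2(p / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

independent = [[0.25, 0.25], [0.25, 0.25]]   # p(x,y) = p(x) p(y) everywhere
correlated  = [[0.45, 0.05], [0.05, 0.45]]   # strongly coupled subsystems

assert mutual_information(independent) == 0.0
assert mutual_information(correlated) > 0.5  # roughly 0.53 bits
```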

B. Dynamical and Topological Invariants

  • Memory-amortized cycles: persistent homology classes $[\gamma] \in H_1(M)$ in the topological manifold of memory cycles, capturing the structural reuse and cycle-consistency underlying cognitive inference (Li, 19 Aug 2025; Li, 3 Dec 2025)
  • Attractor landscapes and multistability: Formal existence of multiple attractor states (fixed points or cycles) in the system’s dynamical evolution, independent of specific substrate (Vallverdu et al., 2017)
  • Lattice or morphospace of contexts and behavioral patterns, representing emergent complexity as the context structure rather than as a single scalar (Hall, 2018)
  • Causal topology: The abstract directed-graph structure of dependencies among system components, as formalized in causal topography (Harnad, 2012)
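Multistability of the kind listed above can be demonstrated with any bistable dynamics; the sketch below uses the standard double-well flow $\dot{x} = x - x^3$ (an assumed example, not taken from the cited papers), whose two attractors at $x = \pm 1$ exist regardless of how the dynamics is physically realized:

```python
def settle(x, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = x - x^3 until the trajectory settles."""
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

# Which attractor is reached depends only on the basin of the initial
# condition (the unstable fixed point x = 0 separates the two basins).
assert abs(settle(0.3) - 1.0) < 1e-6    # positive basin -> attractor at +1
assert abs(settle(-0.3) + 1.0) < 1e-6   # negative basin -> attractor at -1
```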

C. Computational and Automata-Theoretic Invariants

  • Stack-nesting and automata hierarchy: the tuple $(S, k)$, where $S$ is the number of independent stacks and $k$ is the stack nesting depth, anchors cognitive computational power in the Chomsky hierarchy, invariant across physical realizations (Granger, 2020)
  • Morphological computation metrics, e.g., the MC ratio:

$$MC = \frac{I(M_{t+1}; E_t)}{I(A_{t+1}; E_t)}$$

which compares passive bodily computation to explicit control and is invariant under isomorphic morpho-dynamics (Dodig-Crnkovic, 2024)
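A plug-in estimate of the MC ratio from sampled trajectories might look as follows. The trajectories here are synthetic (an assumption for illustration): the body state $M$ tracks the environment $E$ closely, while the controller $A$ is only weakly coupled to it, so the ratio comes out well above 1:

```python
import math
import random
from collections import Counter

def mi_from_pairs(pairs):
    """Plug-in estimate of I(U;V) from a list of (u, v) samples."""
    n = len(pairs)
    pj = Counter(pairs)
    pu = Counter(u for u, _ in pairs)
    pv = Counter(v for _, v in pairs)
    return sum((c / n) * math.log2((c / n) / ((pu[u] / n) * (pv[v] / n)))
               for (u, v), c in pj.items())

random.seed(0)
E = [random.randint(0, 1) for _ in range(2000)]          # environment signal
M = E[:-1]                                               # M_{t+1} = E_t: body offloads fully
A = [e if random.random() < 0.6 else 1 - e for e in E[:-1]]  # noisy, weak controller

mc = mi_from_pairs(list(zip(M, E[:-1]))) / mi_from_pairs(list(zip(A, E[:-1])))
assert mc > 1.0  # passive morphology carries more environmental information
```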

D. Embedding and Navigation Invariants

  • Remapping and navigation in embedding spaces: cognition as the iterative error-minimizing traversal (via gradient descent or associative retrieval) of embedding or latent state spaces, formalized as

$$z_{t+1} = z_t - \eta\,\nabla_z L(z_t)$$

where $L$ is an error or energy functional and the remapping $R_\theta: X \to Z$ absorbs new information (Hartl et al., 20 Jan 2026)
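A minimal sketch of this update rule, assuming a simple quadratic error functional $L(z) = \lVert z - z^* \rVert^2$ (the target $z^*$ standing in for retrieved content; both the functional and the values are illustrative):

```python
def descend(z, target, eta=0.1, steps=200):
    """Iterate z_{t+1} = z_t - eta * grad L(z_t) for L(z) = ||z - target||^2."""
    for _ in range(steps):
        grad = [2 * (zi - ti) for zi, ti in zip(z, target)]
        z = [zi - eta * gi for zi, gi in zip(z, grad)]
    return z

# The trajectory converges to the error minimum regardless of starting point.
z_final = descend([4.0, -2.0], [1.0, 1.0])
assert all(abs(zf - t) < 1e-6 for zf, t in zip(z_final, [1.0, 1.0]))
```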

E. Unified Capacity Metrics

  • Cognitive capacity $\mathcal{C}$ as a geometric mean: defined via mutual-information-based sensing ($S$), processing ($P$), and action ($A$) sub-capacities,

$$\mathcal{C} = (S\,P\,A)^{1/3}$$

with $S, P, A$ measured from empirical trajectories, independent of substrate (Solé et al., 19 Jan 2026).
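The geometric mean makes $\mathcal{C}$ sensitive to bottlenecks: a system weak in any one channel scores low no matter how strong the others are. A one-line sketch (the numeric values are illustrative only):

```python
def capacity(s, p, a):
    """C = (S * P * A)^(1/3): geometric mean of the three sub-capacities."""
    return (s * p * a) ** (1 / 3)

balanced   = capacity(1.0, 1.0, 1.0)   # all channels equal
bottleneck = capacity(3.0, 3.0, 0.01)  # strong sensing/processing, crippled action

assert abs(balanced - 1.0) < 1e-12
assert bottleneck < balanced           # the weak action channel dominates C
```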

3. Core Examples Across Biological and Artificial Substrates

| System | Invariant | Reference |
| --- | --- | --- |
| Bacterial colonies | $I(C_i; C_j)$ for gene expression | (Dodig-Crnkovic, 2024) |
| Neural circuits / robotics | MC ratio, free-energy ($F$) minimization, $\Phi$ | (Dodig-Crnkovic, 2024; Li, 3 Dec 2025) |
| Slime mold (Physarum) | Multistability, memory kernels, $I(X;Y)$ | (Vallverdu et al., 2017) |
| Deep neural nets (LaMa) | Spatial integration, error structure | (Nelson et al., 2023) |
| Human language / recursion | Stack-nesting tuple $(S, k)$ | (Granger, 2020) |
| Memory-amortized cortex | Homology classes $H_1(M)$ | (Li, 19 Aug 2025; Li, 3 Dec 2025) |
| AI transformer / diffusion | Embedding remap / error minimization | (Hartl et al., 20 Jan 2026) |

These examples illustrate invariance of the cognitive phenotype across disparate substrates—e.g., spatial pattern interpolation in both human frontal cortex and convolutional networks; multistability and feedback-driven adaptation in both Physarum and robotic swarms; or stable attractor cycles in both cortical circuitry and robotic motor controllers.

4. Mathematical Structures Underpinning Invariants

4.1 Info-Computational Formulations

  • Morphological computation: $M_{t+1} = F(M_t, E_t)$ for the morpho-dynamical state
  • Substrate-independence axiom: if $(M, F) \cong (M', F')$ via an isomorphism $\varphi$, then the cognitive trajectories are informationally equivalent (Dodig-Crnkovic, 2024)
  • Algorithmic explanations: the structure of cognitive input–output can be completely described by $K_U(s)$ and $m_U(s)$, up to additive or multiplicative constants (Gauvrit et al., 2015)
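$K_U(s)$ is uncomputable, so a common hedge in this literature is to substitute a compression-based upper bound. The sketch below uses zlib as one such proxy (the choice of compressor is an assumption; any lossless compressor works, mirroring the invariance-up-to-a-constant property):

```python
import random
import zlib

def complexity_proxy(s: bytes) -> int:
    """Compressed length as an upper-bound proxy for K_U(s)."""
    return len(zlib.compress(s, 9))

regular = b"ab" * 500                    # highly structured, length 1000
random.seed(42)
noisy = bytes(random.randrange(256) for _ in range(1000))  # same length, incompressible

# The structured string compresses far better than the random-looking one.
assert complexity_proxy(regular) < complexity_proxy(noisy)
```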

4.2 Dynamical-Systems and Topological Invariants

  • Chain complexes/homology: cognitive state-space described as a simplicial complex $X$, with homological invariants $b_k = \operatorname{rank} H_k(X)$ classifying recurrent structures (Li, 3 Dec 2025)
  • Pattern-formation and attractor criteria: symmetry-breaking instabilities and the existence of attractor basins are formalized via the Jacobian $J$ and diffusion matrix $D$, e.g., $\det(J - k^2 D) < 0$ for some wavenumber $k$ (Vallverdu et al., 2017)
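The Turing-type criterion can be checked numerically: a homogeneous state that is linearly stable without diffusion ($\operatorname{tr} J < 0$, $\det J > 0$) becomes pattern-forming when $\det(J - k^2 D) < 0$ for some wavenumber $k$. The Jacobian and diffusion values below are illustrative activator-inhibitor numbers, not taken from the cited work:

```python
J = [[1.0, -1.0],        # activator self-amplifies, inhibitor suppresses it
     [2.0, -1.5]]
D = [1.0, 10.0]          # diagonal diffusion: inhibitor diffuses much faster

def det_shifted(k2):
    """det(J - k^2 D) for a 2x2 Jacobian and diagonal diffusion matrix."""
    return (J[0][0] - k2 * D[0]) * (J[1][1] - k2 * D[1]) - J[0][1] * J[1][0]

assert J[0][0] + J[1][1] < 0          # tr J < 0: stable homogeneous state
assert det_shifted(0.0) > 0           # det J > 0: no instability without diffusion
# An unstable band exists: det(J - k^2 D) < 0 for some k^2 in (0, 10).
assert any(det_shifted(0.05 * i) < 0 for i in range(1, 200))
```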

4.3 Behavioral and Information-Processing Metrics

  • Geometric means over sensing, processing, and action channels: $\mathcal{C}$ as a function of mutual-information flows (Solé et al., 19 Jan 2026)
  • Difficulty and error-structure invariants: item-difficulty curves $p(i)$ and confusion matrices in cognitive task performance, invariant under substrate substitution (Nelson et al., 2023)

5. Maintenance and Generation of Invariants: Self-Organization and Autopoiesis

Cognitive invariants are dynamically maintained by processes of:

  • Self-assembly, ensuring local rules yield globally coherent informational structures
  • Self-organization, guiding the distribution $p(C)$ toward high-mutual-information or low-free-energy regimes
  • Autopoiesis, guaranteeing that boundary conditions and energetic fluxes preserve the invariant computational structure over time (e.g., $C_{t+1} = G(C_t)$ implies $I(C_t; C_{t+k}) = \text{const}$ for large $k$) (Dodig-Crnkovic, 2024)
  • Amortization over cycles: Memory-amortized inference frameworks (MAI) highlight how topological cycles in memory serve as reusable invariants for rapid, context-specific inference while minimizing computational and energetic cost (Li, 19 Aug 2025, Li, 3 Dec 2025)

6. Implications for Cognitive Architectures and Comparative Analysis

  • Architectural design principles for AGI and bio-inspired AI follow directly from these invariants, prescribing:
    • Isomorphic informational topology: Hardware instantiating the same connectivity and cycle structure as neural substrates
    • Morphological offloading: Physical bodies exploiting passive dynamics to minimize central control burden
    • Adaptive self-organization: Continuous adjustment to maintain free-energy minima, mutual information structure, and recurrent cycle integrity
  • The cognition-space formalism unifies system complexity across domains, positioning every agent in the $(\mathcal{O}, \mathcal{I})$ plane of organizational and informational complexity, with $\mathcal{C}$ serving as a global invariant metric. This accommodates transitions from basal (aneural) through neural to hybrid (human–AI) forms (Solé et al., 19 Jan 2026).
  • Automata-theoretic invariants reveal sharp anatomical and behavioral thresholds in cognitive substrate power, e.g., the jump to indexed-grammar (nested-stack) capacities in humans (Granger, 2020).

7. Controversies and Limitations

Harnad distinguishes between simulation (matching causal topology) and true implementation (which requires matching the dynamical substrate), asserting that only the abstract causal graph is a substrate-independent invariant and questioning whether full cognition (including feeling) is purely organizational or requires substrate-dependent dynamics (Harnad, 2012). This underlines a critical open debate: do functionally defined invariants suffice, or is phenomenality essentially bound to specific dynamics?

8. Synthesis

Substrate-independent invariants of cognition are mathematically and empirically robust quantities—mutual information, homology classes, algorithmic complexity, error structure, stack-memory topology, cycle-consistency in memory, or attractor-landscape features—preserved across system implementations. They ground unified architectures for comparative cognition, provide foundations for the rigorous engineering of novel intelligent systems, and reveal the fundamental organizing principles active in biological, artificial, and hybrid agents. Their identification clarifies the boundaries of cognitive capacity, enables cross-domain transfer of design principles, and sharpens the distinction between genuine cognitive implementation and mere symbolic or computational mirroring.

Key references: (Dodig-Crnkovic, 2024; Gauvrit et al., 2015; Li, 19 Aug 2025; Li, 3 Dec 2025; Vallverdu et al., 2017; Nelson et al., 2023; Granger, 2020; Hartl et al., 20 Jan 2026; Solé et al., 19 Jan 2026; Harnad, 2012)
