Pancomputational Enactivist Framework

Updated 12 November 2025
  • Pancomputational Enactivist Framework is a model that unifies enactivist cognitive science with computational dynamics, rejecting dualistic separations between software and hardware.
  • It formalizes cognition through labeled transition systems and abstraction layers, providing mathematically precise models of sensorimotor and agent–environment interactions.
  • The framework outlines objective metrics and safety trade-offs, enabling transparent AGI design through sample-efficient generalization and minimal sufficient refinements.

The Pancomputational Enactivist Framework synthesizes enactivist cognitive science with a pancomputational view of physical and cognitive processes. It proposes that cognition, agency, and intelligence emerge from the explicit computational dynamics realized by the entire agent–environment system, free of any metaphysical dualism between software and hardware or between mind and body. The framework aims to provide mathematically precise models of cognition that comply with the core tenets of enactivism while remaining computationally rigorous and tractable across both natural and artificial systems.

1. Pancomputational Ontology and Collapse of Dualism

At its core, the Pancomputational Enactivist Framework rejects "computational dualism"—the distinction between software as the locus of intelligent behavior and hardware as a passive interpreter (Bennett, 2023). Instead, it adopts a pancomputational ontology in which:

  • Every aspect of the environment, including cognition, body, world-laws, and goals, is a relation among the irreducible states $\Phi$.
  • A "declarative program" (or fact) is identified as a set $f \subseteq \Phi$, with the collection of all programs $P = 2^{\Phi}$.
  • There is no privileged software layer running on a substrate; both sensory data and internal variables are simply facts among these irreducible states.
  • This ontological flattening eliminates the conceptual separation between cognitive policy and environmental interaction: cognition itself is an active transformation of environmental states.

This ontology underpins both formalization and objective evaluation of intelligence by explicitly integrating the agent into the environment as one coupled computational process.
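
As a concrete, if toy, rendering of this ontology, the sketch below encodes a hypothetical universe of three irreducible states and treats facts as subsets of it; the names `phi`, `powerset`, and `fact` are illustrative assumptions, not notation from the cited papers.

```python
from itertools import chain, combinations

# Hypothetical universe of irreducible states (Phi). In the framework,
# cognition, body, world-laws, and goals are all relations among these.
phi = frozenset({"s0", "s1", "s2"})

def powerset(states):
    """All subsets of Phi, i.e. the set P = 2^Phi of declarative programs."""
    items = list(states)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

programs = powerset(phi)           # P = 2^Phi
fact = frozenset({"s0", "s1"})     # a declarative program f, f subset of Phi
assert fact in programs

# No privileged software layer: sensory data and internal variables
# alike would simply be facts among the same irreducible states.
```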

2. Mathematical and System-Theoretic Formalizations

2.1 Labeled Transition Systems and Sensorimotor Dynamics

The formalization recasts cognitive systems as labeled transition systems (LTS), where each system is a quadruple $(S, A, O, \Delta)$ (Weinstein et al., 2022):

  • $S$: states of the agent–environment system.
  • $A$: possible actions (motor commands).
  • $O$: sensory observations.
  • $\Delta \subseteq S \times A \times O \times S$: the transition relation dictating how action–observation pairs yield state changes.

The coupling of brain, body, and world is modeled via the direct product of LTSs; no decomposition into “internal” and “external” systems is permitted, enforcing the enactivist principle of embodiment. All cognitive distinctions, including perceptual categories and attunements, are represented by labelings $h : S \to L$ that classify state trajectories but do not attribute semantic or contentful representations to the agent.
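
A minimal sketch of such an LTS in Python follows; the dataclass encoding and the two-state toy system are assumptions about how the quadruple could be represented, not code from Weinstein et al. (2022).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LTS:
    states: frozenset        # S: agent-environment states
    actions: frozenset       # A: motor commands
    observations: frozenset  # O: sensory observations
    delta: frozenset         # transition relation, subset of S x A x O x S

# A toy coupled system: two states, one action, two observations.
toy = LTS(
    states=frozenset({"s", "t"}),
    actions=frozenset({"a"}),
    observations=frozenset({"o1", "o2"}),
    delta=frozenset({("s", "a", "o1", "t"), ("t", "a", "o2", "s")}),
)

# A labeling h: S -> L classifies state trajectories without attributing
# semantic content to the agent.
h = {"s": "near", "t": "far"}
```

The direct product of two such systems (pairing states and synchronizing transitions) would then model the brain–body–world coupling as one undivided system.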

2.2 Abstraction Layers and System Vocabulary

Formally, abstraction is achieved by selecting a finite vocabulary $\mathfrak{v} \subseteq P$ of facts, defining the language $L_{\mathfrak{v}} = \{\, l \subseteq \mathfrak{v} : \bigcap l \neq \emptyset \,\}$. An "aspect" $l \in L_{\mathfrak{v}}$ corresponds to a set of facts simultaneously instantiated and, operationally, relates to what the agent can sense or manipulate. The extension $E_l$ enumerates all ways to further specify a partial fact-set (Bennett, 2023).

Tasks are specified as pairs $\alpha = \langle I_\alpha, O_\alpha \rangle$, where $I_\alpha$ is the set of permitted inputs (aspects) and $O_\alpha$ is the set of correct outputs (extensions). Cognitive policies are then expressed as statements $\pi \in L_{\mathfrak{v}}$, with correctness determined by how their extensions intersect the outputs for given inputs.
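
The sketch below renders these constructions over a small universe; the correctness test is a simplified reading of "extensions intersecting the outputs," offered as an assumption rather than Bennett's (2023) exact definition.

```python
from itertools import chain, combinations

# A finite vocabulary v: a set of facts (subsets of the universe Phi).
vocab = [frozenset({"s0", "s1"}), frozenset({"s1", "s2"}), frozenset({"s2"})]

def nonempty_subsets(xs):
    xs = list(xs)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(xs, r) for r in range(1, len(xs) + 1))]

# L_v: sets of facts from v that are simultaneously instantiable
# (their intersection is nonempty).
L_v = [l for l in nonempty_subsets(vocab) if frozenset.intersection(*l)]

def extension(l):
    """E_l: all aspects in L_v that further specify (are supersets of) l."""
    return [m for m in L_v if l <= m]

def is_correct(pi, inputs, outputs):
    """Simplified correctness for policy pi on task (inputs, outputs):
    every extension of pi consistent with a permitted input must be a
    correct output."""
    return all(m in outputs
               for i in inputs
               for m in extension(pi) if i <= m)
```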

3. Enactivist Cognition: Embodiment, Emergence, and Simulation

3.1 Five Enactivist Tenets in Formal Encoding

The framework is rigorously aligned with five fundamental tenets of enactivist cognitive science (Weinstein et al., 2022):

  • Embodiment (EA1): Agent and environment are jointly modeled; there is no separable cognitive core.
  • Groundedness (EA2): The system contains no pre-given semantic representations; any justified distinctions are external labelings based on past sensorimotor interactions.
  • Emergence (EA3): Structures and categories arise from the dynamics of coupled agent–environment systems; new partitions emerge through enactment.
  • Attunement (EA4): A labeling is "sufficient" if it supports reliable prediction or goal achievement based on dynamic interaction histories.
  • Perception (EA5): Perception is the result of multi-step, skillful engagements, algorithmically deriving minimal sufficient refinements of base labelings.

3.2 Action–Perception Loops as Computation

Every computation in this framework, from sensor updates to motor commands, is a state transformation in the LTS. The enactive sense–act loop is realized concretely: inputs and policies are both subsets of the system vocabulary, and each inference corresponds to the selection of new facts, actively restructuring the coupled state (Bennett, 2023).

Mutual prediction and recursive simulation, especially in communicative architectures, are formalized as symmetric feedback loops. Perception is realized as search-driven inversion of generative models ("analysis-by-synthesis"), while production entails forward simulation of an action's inferred effect on the receiver (Moore, 2016).
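
A minimal analysis-by-synthesis sketch, assuming a hypothetical generative model and a brute-force search over candidate hypotheses (Moore, 2016, does not prescribe this implementation):

```python
def generate(hypothesis):
    """Hypothetical generative model: the observation a hypothesized
    world state would produce."""
    return {"door_open": "bright", "door_closed": "dark"}[hypothesis]

def perceive(observation, hypotheses):
    """Perception as search-driven inversion of the generative model:
    pick the hypothesis whose forward simulation best matches the input."""
    return min(hypotheses,
               key=lambda h: 0 if generate(h) == observation else 1)

print(perceive("dark", ["door_open", "door_closed"]))  # -> door_closed
```

Production would run the same machinery forward: simulate an action's inferred effect on the receiver before emitting it.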

4. Objective Metrics: Sufficiency, Generalisation, and Intelligence Limits

4.1 Sufficiency and Minimal Sufficient Refinements

A labeling or equivalence relation is "sufficient" if no two currently indistinguishable states can later be separated by any agent action. This is formalized as:

$$\big(h(s) = h(t)\big) \wedge (s, a, o, s') \in \Delta \wedge (t, a, o', t') \in \Delta \implies h(s') = h(t')$$

The minimal sufficient refinement is the coarsest partition refining a given equivalence that is still sufficient; it is unique by lattice-theoretic arguments. This encapsulates the concept of optimal attunement and is algorithmically computable (Weinstein et al., 2022).
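
One way to compute the minimal sufficient refinement is iterated partition splitting, in the style of bisimulation algorithms; the encoding below (transition tuples, signature-based splitting) is an assumption about how the construction in Weinstein et al. (2022) could be realized, not their code.

```python
def minimal_sufficient_refinement(states, delta, h):
    """Refine labeling h until sufficiency holds: two states sharing a
    label must not be separable by any action-observation outcome.
    delta: set of (s, a, o, s') tuples; h: dict mapping state -> label."""
    labels = {s: (h[s],) for s in states}
    while True:
        def signature(s):
            # A state's current label plus the labels it can reach
            # under each action-observation pair.
            succ = tuple(sorted((a, o, labels[s2])
                                for (s1, a, o, s2) in delta if s1 == s))
            return (labels[s], succ)
        new = {s: signature(s) for s in states}
        same_partition = all(
            (new[s] == new[t]) == (labels[s] == labels[t])
            for s in states for t in states)
        if same_partition:
            return labels  # fixpoint: the coarsest sufficient refinement
        labels = new
```

Because each pass can only split blocks, never merge them, the loop terminates on finite systems at the coarsest sufficient partition refining h.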

4.2 Generalisation, Weakness, and Sample Efficiency

Generalisation is captured by whether a statement $l \in L_{\mathfrak{v}}$ belongs to the set of correct policies $\Pi_\alpha$; if so, it constitutes a causal bridge between the inputs $I_\alpha$ and the authorized outputs $O_\alpha$. Weakness is the ordering $l_1 <_w l_2$ iff $|E_{l_1}| < |E_{l_2}|$, and it serves as the optimal proxy for identifying the structures most likely to generalize under uniform task distributions (Bennett, 2023). The upper bound on intelligence is characterized by maximizing sample efficiency, formalized via the utility of the weakest correct policy in a given abstraction layer.
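
A small sketch of policy selection under this ordering; the helper treats correctness and extension size as supplied callables, since the exact task encoding varies (all names are illustrative).

```python
def weakest_correct_policy(policies, is_correct, extension_size):
    """Among correct policies, return the one with the largest extension:
    larger |E_l| means weaker, and the weakest correct policy is the
    best proxy for sample-efficient generalisation."""
    return max((p for p in policies if is_correct(p)), key=extension_size)

# Illustration with precomputed extension sizes and correctness flags.
ext = {"p1": 2, "p2": 5, "p3": 3}
ok = {"p1": True, "p2": True, "p3": False}
assert weakest_correct_policy(ext, ok.get, ext.get) == "p2"
```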

5. Pancomputational Generalization and Illustrative Examples

All theoretical constructs within the framework—transition relations, hierarchy, refinements, policies—are explicit computations. The pancomputational stance asserts:

  • Every physical or cognitive process can be mapped (possibly via infinitary constructions) to a transition system.
  • Cognitive activity is a subset of such couplings and emergent refinements.
  • Meaning and content are not pre-imposed but emerge from minimal sufficient refinements within sensorimotor labelings (Weinstein et al., 2022).

Examples such as gridworld navigation and maze-solving demonstrate these concepts, with the derivation of minimal sufficient filters and refinements being fully algorithmic. Strategic sufficiency further generalizes attunement to goal-directed scenarios, whereas the degree of insufficiency quantifies the memory requirements for adequate prediction.
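
As an illustration, consider a hypothetical 1-D gridworld whose agent senses only whether it sits at a wall; the two interior cells are initially indistinguishable, and refinement separates them by which action reveals a wall. The construction below feeds into the `minimal_sufficient_refinement` sketch from Section 4.1.

```python
# A 1-D gridworld with cells 0..3; moving off the ends bumps the wall.
states = [0, 1, 2, 3]

def obs(p):
    return "wall" if p in (0, 3) else "free"

delta = set()
for p in states:
    for a, d in (("left", -1), ("right", +1)):
        q = min(max(p + d, 0), 3)
        delta.add((p, a, obs(q), q))

# Base labeling: the agent's immediate sensation only.
h = {p: obs(p) for p in states}

# Refinement splits the "free" block {1, 2}: from cell 1 the action
# "left" reveals a wall, whereas from cell 2 it does not.
refined = minimal_sufficient_refinement(states, delta, h)
```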

6. Architectural Implications and Contrast with Classical AI

The Pancomputational Enactivist Framework diverges sharply from symbolic and reactive AI architectures. Classical GOFAI and subsumption systems rely on static mappings or hand-crafted symbolic rules, whereas pancomputational enactivism implements:

  • Homogeneous networks of feedback loops (eschewing monolithic planning modules).
  • Online search and generative emulators for inference and action selection.
  • Mutual recursion for modeling both self and other in communicative tasks.
  • Layered feedback networks for adaptation at every perceptuo–motor interface.
  • The integration of motivation, emotion, and appraisal as controlled variables in higher-level loops (Moore, 2016).

Implications for design include constructing recursive, model-based, and fully embodied agents, in which every aspect of behavior is both world-involving and computationally explicit.

7. Limits, Safety, and Systemic Trade-offs

The abandonment of computational dualism has direct safety and tractability consequences for advanced intelligent systems, such as AGI:

  • Any system must explicitly choose an abstraction vocabulary $\mathfrak{v}$, precluding the possibility of unmodeled, "black-box" substrate risk (Bennett, 2023).
  • The upper bound on sample-efficient generalisation is made explicit and optimal, tightly constraining rates of adaptation to novel tasks.
  • Trade-offs between vocabulary size $|\mathfrak{v}|$ and the utility $\epsilon$ of intelligence can be formalized and optimized, giving principled means to manage AGI capability and risk.

A plausible implication is that AGI, constructed under these premises, would be fundamentally more transparent and auditable but also more limited in generalization than open-ended theoretical models suppose. This formal, pancomputational enactivist stance thus imparts both precise limits and operational resilience to the design of autonomous cognitive systems.
