- The paper proposes weakness, the cardinality of a model's extension, as a proxy for intelligence, demonstrating that weaker (not merely simpler) models generalize more effectively across defined tasks.
- It critiques AIXI’s dependence on specific Universal Turing Machines, arguing that its Pareto optimality remains subjective.
- Integrating enactive cognition with pancomputationalism, the study redefines AGI and ASI through a context-aware, task-based framework.
An Examination of Enactivism and Objectively Optimal Super-Intelligence
This paper by Michael Timothy Bennett formulates a paradigm for artificial super-intelligence (ASI) and artificial general intelligence (AGI) grounded in enactive cognition and pancomputationalism. It challenges the mind-body dualism prevalent in conventional conceptions of AI, and it examines the limitations of adopting AIXI as the theoretical model for AGI, asserting that AIXI's Pareto optimality is subjective because it depends on the choice of Universal Turing Machine (UTM).
Theoretical Framework
The paper combines several philosophical and computational perspectives: enactive cognition, pancomputationalism, and the use of weakness as a proxy for intelligence. Enactivism rejects mind-body dualism, affirming that cognition emerges from an interplay between the organism and its environment. This perspective requires a reevaluation of how computational models are structured, moving away from the notion of AGI as isolated algorithmic "minds" running on interchangeable hardware.
Revisiting AIXI’s Limitations
AIXI, as discussed in the paper, is a reinforcement-learning agent whose performance is gauged by Legg-Hutter intelligence. It employs Solomonoff induction for inference, but its performance depends on the UTM used as the reference machine: a different choice of interpreter can yield different performance evaluations, which renders the claim of optimality subjective. Consequently, the paper argues for a model of cognition whose performance claims do not depend on the choice of interpreter, proposing the integration of enactive cognition in pancomputationalist contexts.
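To make the interpreter-relativity concrete, the following toy sketch (not AIXI itself; both reference machines are invented for illustration) shows that the length of the shortest program producing a given output depends entirely on the machine that interprets programs:

```python
# Toy illustration of why Solomonoff-style complexity is interpreter-relative:
# the shortest program producing a given output depends on the reference
# machine chosen. Both machines below are hypothetical, made up for this sketch.

from itertools import product

def shortest_program_length(machine, target, max_len=8):
    """Brute-force the length of the shortest binary program that
    `machine` maps to `target`."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            if machine("".join(bits)) == target:
                return n
    return None

def machine_a(program):
    # Treats "1" as a built-in primitive for the string "hello".
    return "hello" if program == "1" else program

def machine_b(program):
    # Has no such primitive; "hello" needs a longer encoding.
    return "hello" if program == "110101" else program

print(shortest_program_length(machine_a, "hello"))  # 1
print(shortest_program_length(machine_b, "hello"))  # 6
# The same behaviour receives different complexities under different
# interpreters, so "simplest" (and AIXI's optimality) is relative to that choice.
```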
By treating cognition as integrated with its environment and adopting pancomputationalism, the paper delineates cognition task by task, allowing performance claims that are interpreter-agnostic. The "mind" is modeled as a subset of the environment, with tasks defined as self-contained subproblems in which cognitive intent is embedded in a specific context. This task-oriented framing draws on principles such as the Curry-Howard isomorphism, emphasizing an equivalence between declarative and imperative programs.
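A minimal sketch of this task-based framing, using a hypothetical representation (situations as sets of facts) rather than the paper's exact formalism:

```python
# Hypothetical encoding chosen only to make "extension" concrete:
# a statement's extension is the set of situations consistent with it.

# The "environment": every situation that can occur, each a set of facts.
SITUATIONS = [
    frozenset({"light:red", "act:stop"}),
    frozenset({"light:red", "act:go"}),
    frozenset({"light:green", "act:go"}),
    frozenset({"light:green", "act:stop"}),
]

def extension(statement):
    """All situations in which every fact of `statement` holds."""
    return [s for s in SITUATIONS if statement <= s]

# A task bundles context and intent: the situations that may arise,
# and which of those count as correct.
task = {
    "inputs":  [s for s in SITUATIONS if "light:red" in s],
    "correct": [frozenset({"light:red", "act:stop"})],
}

# A model is itself a statement; its extension is the behaviour it permits.
model = frozenset({"act:stop"})
print(extension(model))  # permits stopping at red and at green

# The model solves the task if, restricted to the task's inputs,
# its extension is exactly the set of correct decisions.
solves = [s for s in task["inputs"] if model <= s] == task["correct"]
print(solves)  # True
```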
Toward Objectively Optimal Intelligence
The core contribution of the paper is a new proxy for intelligence called "weakness": the cardinality of a model's extension, contrasted with description length. The paper demonstrates that weaker models generalize better across tasks, and it proposes definitions of AGI and ASI anchored in selecting models that maximize this generalization.
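The following hedged sketch contrasts weakness with description length as selection criteria; the hypotheses, description lengths, and toy task are invented for illustration. Note that description length depends on the hypothetical language (echoing the UTM problem above), while weakness, being a count over extensions, does not:

```python
# Invented hypotheses and description lengths; the paper's formalism is
# set-theoretic, not Python.

from itertools import product

INPUTS = list(product([0, 1], repeat=3))  # every possible situation

# name -> (predicate, description length in some hypothetical language).
# The language grants "memorize" a one-symbol name, exactly the kind of
# interpreter-relativity that description length suffers from.
HYPOTHESES = {
    "memorize":  (lambda x: x == (1, 1, 0), 1),
    "x0 and x1": (lambda x: x[0] == 1 and x[1] == 1, 4),
    "x0":        (lambda x: x[0] == 1, 2),
}

def weakness(pred):
    """Cardinality of the extension: how many situations the hypothesis permits."""
    return sum(1 for x in INPUTS if pred(x))

train = [(1, 1, 0)]   # observed positive examples
held_out = (1, 0, 1)  # an unseen case the task also covers

consistent = {n: (p, dl) for n, (p, dl) in HYPOTHESES.items()
              if all(p(x) for x in train)}

shortest = min(consistent, key=lambda n: consistent[n][1])
weakest  = max(consistent, key=lambda n: weakness(consistent[n][0]))

print(shortest, consistent[shortest][0](held_out))  # memorize False
print(weakest,  consistent[weakest][0](held_out))   # x0 True
```

Here the shortest hypothesis merely memorizes the training data and fails on the held-out case, while the weakest consistent hypothesis generalizes.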
- AGI Definition: An AGI is proposed as a mechanism that selects optimal hypotheses, meaning the weakest hypotheses consistent with a given task.
- ASI Definition: An ASI is defined as an entity that additionally optimizes its vocabulary, the analogue of sensorimotor capability, so that the AGI can enact intelligence effectively and to substantive utility (a toy sketch follows this list).
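A speculative sketch of what vocabulary optimization might mean: the same task seen through two hypothetical vocabularies (sets of expressible predicates), where a richer vocabulary permits weaker-yet-correct hypotheses. All names and predicates are invented for illustration:

```python
# Hypothetical vocabularies; a richer one lets the agent state weaker
# hypotheses that are still correct on the observed data.

from itertools import product

INPUTS = list(product([0, 1], repeat=3))
target = lambda x: (x[0] + x[1] + x[2]) % 2 == 1  # parity: hard to name pointwise

VOCAB_A = [lambda x, i=i: x == i for i in INPUTS]  # can only name single situations
VOCAB_B = VOCAB_A + [target]                       # also has a parity primitive

def best_weakness(vocab, train):
    """Largest extension attainable by a hypothesis consistent with `train`."""
    ok = [h for h in vocab if all(h(x) == y for x, y in train)]
    return max((sum(1 for x in INPUTS if h(x)) for h in ok), default=0)

train = [((1, 0, 0), True)]
print(best_weakness(VOCAB_A, train))  # 1: only the memorised situation
print(best_weakness(VOCAB_B, train))  # 4: the parity concept itself
```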
The paper also raises practical considerations in measuring intelligence, focusing on how quickly an agent adapts to or generalizes from limited data, shifting the emphasis from traditional asymptotic optimality toward a pragmatic, embodied measure of intelligence.
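One way such a measure could be operationalized (a minimal, hypothetical sketch, not the paper's procedure) is to count the labelled examples an agent consumes before its chosen hypothesis agrees with the target concept everywhere:

```python
# Hypothetical sample-efficiency score for a weakness-maximizing learner.

from itertools import product

INPUTS = list(product([0, 1], repeat=4))
target = lambda x: x[0] == 1  # the true concept, unknown to the learner
HYPOTHESES = [
    lambda x: x[0] == 1,
    lambda x: x[0] == 1 and x[1] == 1,
    lambda x: True,
    lambda x: False,
]

def weakness(h):
    """Cardinality of the hypothesis's extension."""
    return sum(1 for x in INPUTS if h(x))

def samples_needed(example_order):
    """Labelled examples consumed before the weakest consistent
    hypothesis agrees with the target concept on every input."""
    seen = []
    for n, x in enumerate(example_order, start=1):
        seen.append((x, target(x)))
        ok = [h for h in HYPOTHESES if all(h(xi) == yi for xi, yi in seen)]
        best = max(ok, key=weakness)
        if all(best(x) == target(x) for x in INPUTS):
            return n
    return len(example_order)

print(samples_needed(INPUTS))  # a crude per-agent score: lower is better
```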
Implications and Future Directions
This work offers a refined perspective on intelligence, advancing theoretical AI by addressing the subjectivity of performance evaluations. Its implications span practical applications and philosophical inquiries into the nature of cognition. Practically, incorporating environmental context provides a pathway toward more robust AGI systems better suited to complex adaptive settings. Theoretically, it pushes beyond conventional paradigms, asking AI researchers to rethink what constitutes intelligence and generality in artificial entities.
In conclusion, Bennett's treatment of intelligence as intrinsically linked to its environment opens a new frontier for examining the efficacy of AI systems beyond typical computational metrics. Future work could refine implementable languages for these formalisms and integrate them with modern neural architectures, steps toward truly context-aware AI systems.