Computational Dualism and Objective Superintelligence (2302.00843v7)

Published 2 Feb 2023 in cs.AI and math.LO

Abstract: The concept of intelligent software is flawed. The behaviour of software is determined by the hardware that "interprets" it. This undermines claims regarding the behaviour of theorised, software superintelligence. Here we characterise this problem as "computational dualism", where instead of mental and physical substance, we have software and hardware. We argue that to make objective claims regarding performance we must avoid computational dualism. We propose a pancomputational alternative wherein every aspect of the environment is a relation between irreducible states. We formalise systems as behaviour (inputs and outputs), and cognition as embodied, embedded, extended and enactive. The result is cognition formalised as a part of the environment, rather than as a disembodied policy interacting with the environment through an interpreter. This allows us to make objective claims regarding intelligence, which we argue is the ability to "generalise", identify causes and adapt. We then establish objective upper bounds for intelligent behaviour. This suggests AGI will be safer, but more limited, than theorised.

Citations (5)

Summary

  • The paper proposes weakness as a proxy for intelligence by demonstrating that simpler models generalize more effectively across defined tasks.
  • It critiques AIXI’s dependence on specific Universal Turing Machines, arguing that its Pareto optimality remains subjective.
  • Integrating enactive cognition with pancomputationalism, the study redefines AGI and ASI through a context-aware, task-based framework.

An Examination of Enactivism and Objectively Optimal Super-Intelligence

This paper by Michael Timothy Bennett formulates a paradigm for artificial general intelligence (AGI) and artificial super-intelligence (ASI) grounded in enactive cognition and pancomputationalism. It challenges the mind-body dualism implicit in conventional conceptions of AI, focusing on the limitations of adopting AIXI as the theoretical model for AGI: the paper asserts that AIXI's Pareto optimality is subjective because it depends on the choice of Universal Turing Machine (UTM).

Theoretical Framework

The paper combines several philosophical and computational perspectives: enactive cognition, pancomputationalism, and the use of weakness as a proxy for intelligence. Enactivism rejects mind-body dualism, affirming that cognition emerges from an interplay between the organism and its environment. This perspective requires a reevaluation of how computational models are structured, moving away from the notion of AGI as isolated algorithmic "minds" running on interchangeable hardware.

Revisiting AIXI’s Limitations

AIXI, as discussed in the paper, is a reinforcement learning agent whose performance is gauged by Legg-Hutter intelligence. It employs Solomonoff induction for inference, but its performance claims are contingent on the UTM used as the reference machine: choosing a different interpreter yields different performance evaluations, which renders the claim of optimality subjective. Consequently, the paper argues for a cognitive model whose performance does not depend on the choice of interpreter, proposing the integration of enactive cognition into a pancomputationalist framework.
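The interpreter-dependence argument can be illustrated with a toy sketch (this is not AIXI itself; the two "reference machines" and their code lengths are invented for illustration). A length-based prior favours whichever hypothesis the chosen machine happens to encode more compactly, so two machines can disagree about which hypothesis is "simpler":

```python
# Toy illustration of reference-machine dependence: the same two
# hypotheses receive different code lengths (in bits) under two
# hypothetical reference machines, flipping which one a
# length-based (Occam) prior prefers. The numbers are made up.

utm_a = {"all_zeros": 5, "alternating": 9}
utm_b = {"all_zeros": 11, "alternating": 4}

def simplest(code_lengths):
    """Return the hypothesis a length-based prior would favour."""
    return min(code_lengths, key=code_lengths.get)

print(simplest(utm_a))  # all_zeros
print(simplest(utm_b))  # alternating
```

Because each "machine" imposes its own ranking, any optimality claim stated relative to one of them is relative, not objective, which is the subjectivity the paper targets.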

Formalising Cognition Beyond Dualism

By treating cognition as integrated with its environment and adopting pancomputationalism, the paper delineates cognition task by task, which permits claims about performance that are interpreter-agnostic. The "mind" is modelled as a subset of the environment, and tasks are defined as self-contained subproblems that embed cognitive intent in a specific context. This task-oriented framing draws on the Curry-Howard correspondence, treating declarative and imperative programs as equivalent.
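A minimal sketch of this task-wise framing, using sets of input-output pairs (my own toy rendering, not the paper's formal definitions): a task is the set of acceptable pairs, observed data is a subset of the task, and a model is any set of pairs that both reproduces the data and stays within the task.

```python
# Toy formalisation: tasks, data, and models as sets of
# (input, output) pairs. All three example sets are invented.

def fits(model, data):
    """A model fits the data if it reproduces every observed pair."""
    return data <= model

def completes(model, task):
    """A model completes a task if every pair it licenses is acceptable."""
    return model <= task

task = {("a", 1), ("b", 2), ("c", 3)}   # all acceptable behaviour
data = {("a", 1)}                        # what has been observed
model = {("a", 1), ("b", 2)}             # one candidate hypothesis

assert fits(model, data) and completes(model, task)
```

Because the model is literally a part of the (pancomputational) environment rather than a program awaiting an interpreter, statements about it hold independently of any choice of UTM.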

Toward Objectively Optimal Intelligence

The core contribution of the paper is a new proxy for intelligence called "weakness": the cardinality of a model's extension. In contrast to description length, the paper demonstrates that weaker models (those with larger extensions) generalize better across tasks, and it proposes new definitions of AGI and ASI anchored in selecting models that maximize this generalization.

  • AGI Definition: An AGI is proposed as a mechanism that selects optimal hypotheses for given tasks.
  • ASI Definition: ASI is defined as an entity that optimizes its vocabulary (akin to sensorimotor capability) so that an AGI can enact intelligence effectively for substantive utility.
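The weakness criterion can be sketched as a selection rule over candidate models (a hedged toy under the set-based reading above; the candidate sets and data are invented): among models consistent with the data, prefer the one whose extension has the largest cardinality.

```python
# Toy weakness-based model selection: among candidates that contain
# the observed data, pick the one with the largest extension.

def weakest(candidates, data):
    """Return the consistent candidate with the largest extension."""
    consistent = [m for m in candidates if data <= m]
    return max(consistent, key=len)

data = {("a", 1)}
candidates = [
    {("a", 1)},                        # narrow: merely memorises the data
    {("a", 1), ("b", 2)},
    {("a", 1), ("b", 2), ("c", 3)},    # weakest: largest extension
]

chosen = weakest(candidates, data)
print(len(chosen))  # 3
```

A minimum-description-length rule would tend toward the first, most compact candidate; the weakness rule instead favours the hypothesis that licenses the most behaviour consistent with what was observed, which is the sense in which weaker models generalize better.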

The paper also points to practical considerations in measuring intelligence, focusing on how quickly an agent adapts or generalizes from limited data, moving from traditional asymptotic optimality to a pragmatic embodiment of intelligence.

Implications and Future Directions

This work introduces a refined perspective on intelligence, advancing discourses in theoretical AI by addressing subjectivity in performance evaluations. The implications span both practical applications and philosophical inquiries into the nature of cognition. Practically, the incorporation of environmental contexts provides a pathway for more robust AGI systems better suited for complex adaptive systems. Theoretically, it stretches beyond conventional paradigms, asking AI researchers to rethink the very fabric of what constitutes intelligence and generality in artificial entities.

In conclusion, Bennett’s treatment of intelligence as intrinsically linked with the fabric of the environment establishes a new frontier for examining the efficacy of AI systems beyond typical computational metrics. Future research directions will explore refining implementable languages and further integrating these ideations with modern neural architectures—essential for advancing toward truly context-aware AI systems.
