Artificial intelligence is algorithmic mimicry: why artificial "agents" are not (and won't be) proper agents (2307.07515v4)

Published 27 Jun 2023 in cs.AI

Abstract: What is the prospect of developing artificial general intelligence (AGI)? I investigate this question by systematically comparing living and algorithmic systems, with a special focus on the notion of "agency." There are three fundamental differences to consider: (1) Living systems are autopoietic, that is, self-manufacturing, and therefore able to set their own intrinsic goals, while algorithms exist in a computational environment with target functions that are both provided by an external agent. (2) Living systems are embodied in the sense that there is no separation between their symbolic and physical aspects, while algorithms run on computational architectures that maximally isolate software from hardware. (3) Living systems experience a large world, in which most problems are ill-defined (and not all definable), while algorithms exist in a small world, in which all problems are well-defined. These three differences imply that living and algorithmic systems have very different capabilities and limitations. In particular, it is extremely unlikely that true AGI (beyond mere mimicry) can be developed in the current algorithmic framework of AI research. Consequently, discussions about the proper development and deployment of algorithmic tools should be shaped around the dangers and opportunities of current narrow AI, not the extremely unlikely prospect of the emergence of true agency in artificial systems.

Citations (6)

Summary

  • The paper reveals that AI systems, defined by algorithmic mimicry, lack autopoiesis and cannot self-create goals.
  • It demonstrates that the separation of software and hardware in algorithms leads to indirect and constrained interaction with the environment.
  • The paper highlights that AI’s confinement to well-defined symbolic worlds prevents it from addressing the open-ended challenges of natural, complex systems.

Overview of Algorithmic Mimicry

Recent discussions in AI have drawn attention to the capabilities and limits of algorithms with respect to reaching a form of intelligence comparable to that of living organisms. A systematic comparison reveals fundamental distinctions between living organisms and algorithmic systems that bear directly on the development and application of AI.

Autopoiesis and Agency

At the core of the comparison between biological entities and algorithms is the concept of autopoiesis: the capacity of an organism to manufacture and maintain itself. This capability allows living systems to set their own intrinsic goals, driven by the imperative to survive and reproduce. Algorithms, by contrast, operate within environments and toward target functions that are established by external entities, most often humans. This discrepancy marks a profound gap in the potential for agency between organisms and AI systems.
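The contrast between intrinsic and externally supplied goals can be made concrete with a minimal sketch. The objective and function names below are illustrative, not from the paper; the point is only that both the goal and the search space enter the system from outside:

```python
# Hypothetical sketch: an algorithmic system "pursuing" a goal.
# Both the objective and its parameters are supplied by an external
# designer; the system cannot originate or revise goals on its own.

def external_objective(x):
    # The target (minimize distance to x = 3) is chosen by the
    # human designer, not by the optimizing system itself.
    return (x - 3.0) ** 2

def gradient_descent(objective, x0, lr=0.1, steps=100, eps=1e-6):
    """Minimize a designer-supplied objective by finite-difference descent."""
    x = x0
    for _ in range(steps):
        # Central finite-difference estimate of the gradient.
        grad = (objective(x + eps) - objective(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

x_star = gradient_descent(external_objective, x0=0.0)
print(round(x_star, 2))  # converges toward the externally chosen target x = 3
```

However sophisticated the optimizer, nothing in this loop decides *what* to want: swapping in a different `external_objective` redirects the system entirely, which is the sense in which its goals are extrinsic.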

Embodiment in Living Systems vs. Algorithms

Embodiment denotes the inseparability of the symbolic and physical aspects of an entity. In living systems there is no such separation: their 'software' and 'hardware' are one and the same, making their interaction with the environment direct and context-dependent. In stark contrast, algorithms run on computational architectures deliberately designed to isolate software from hardware, resulting in an indirect and less adaptive engagement with the physical world.

Large vs. Small Worlds

Organisms navigate complex, ambiguous environments: large worlds containing innumerable problems that are ill-defined or not definable at all. They have evolved cognitive strategies to identify and grapple with the issues relevant to their survival. AI, by contrast, exists in small, symbolic worlds confined to well-defined computational problems, where everything, including problem scope and relevance, is fixed in advance. As a result, AI cannot emulate the flexibility and open-endedness inherent to natural systems.
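What "well-defined" means here can be illustrated with a toy search problem (the graph and names are invented for illustration): every state, transition, start, and goal is enumerated before the algorithm runs, which is precisely what a large world never grants an organism.

```python
# Hypothetical sketch of a "small world": the entire state space is
# closed and fixed ahead of time by the designer, so the problem is
# fully specified before the algorithm ever executes.
from collections import deque

GRAPH = {  # complete, closed state space
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D"],
    "D": [],
}

def shortest_path(graph, start, goal):
    """Breadth-first search; its guarantees hold only because the
    world is closed and every relevant state is known in advance."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable within the predefined world

print(shortest_path(GRAPH, "A", "D"))  # ['A', 'B', 'D']
```

The algorithm never has to ask which states exist, which transitions matter, or whether "D" is worth reaching; all of that relevance-determination, which the paper locates in living systems, is done beforehand by the designer.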

Conclusion on AGI Prospects

Bearing in mind the philosophical and organizational barriers articulated above, the current trajectory of AI research, focused heavily on computational efficiency and scale, is unlikely to culminate in true artificial general intelligence (AGI) possessing natural agency or consciousness. While AI systems such as LLMs offer impressive mimicry of certain cognitive tasks, equating their performance with human-like intelligence misconstrues their capabilities. Attention is therefore better directed at the implications and regulation of existing narrow-AI applications than at the unlikely emergence of autonomous AGI entities.
