
There is no Artificial General Intelligence (1906.05833v2)

Published 9 Jun 2019 in cs.AI and cs.CL

Abstract: The goal of creating AGI -- or in other words of creating Turing machines (modern computers) that can behave in a way that mimics human intelligence -- has occupied AI researchers ever since the idea of AI was first proposed. One common theme in these discussions is the thesis that the ability of a machine to conduct convincing dialogues with human beings can serve as at least a sufficient criterion of AGI. We argue that this very ability should be accepted also as a necessary condition of AGI, and we provide a description of the nature of human dialogue in particular and of human language in general against this background. We then argue that it is for mathematical reasons impossible to program a machine in such a way that it could master human dialogue behaviour in its full generality. This is (1) because there are no traditional explicitly designed mathematical models that could be used as a starting point for creating such programs; and (2) because even the sorts of automated models generated by using machine learning, which have been used successfully in areas such as machine translation, cannot be extended to cope with human dialogue. If this is so, then we can conclude that a Turing machine also cannot possess AGI, because it fails to fulfil a necessary condition thereof. At the same time, however, we acknowledge the potential of Turing machines to master dialogue behaviour in highly restricted contexts, where what is called "narrow" AI can still be of considerable utility.

An Essay on "There is no Artificial General Intelligence"

The paper "There is no Artificial General Intelligence" by Jobst Landgrebe and Barry Smith disputes the feasibility of achieving artificial general intelligence (AGI), framing the question in terms of human intelligence and language capabilities. The authors contend that any account of AGI must include the capacity of machines to conduct fluent and convincing dialogues with humans, arguing that this capability is not merely a sufficient criterion of AGI, as some have claimed, but a necessary one.

At the core of the paper is the assertion that, for mathematical reasons, it is impossible to program machines that fully emulate human dialogue behavior. Two factors support this view: there are no explicitly designed mathematical models that capture the nuances of human dialogue, and the automated models produced by machine learning, despite their success in applications such as machine translation, cannot be extended to handle dialogue's complexities. The argument rests on the inherently stochastic, deeply context-dependent nature of conversation, which resists algorithmic capture.

The authors distinguish between narrow AI, which can perform well in limited contexts, and AGI, which requires a broader spectrum of human-like capabilities. They posit that Turing machines, or modern computers, can master dialogues in highly restricted contexts, highlighting the potential utility of narrow AI. However, the vast scope and context-dependent nature of human communication present insurmountable hurdles for creating AGI.
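The contrast between narrow AI and AGI can be made concrete with a toy sketch of what "dialogue in a highly restricted context" looks like in practice. The following rule-based responder is purely illustrative (it is not the authors' system, and all rules and names are assumptions): it performs adequately inside one tiny domain and fails silently outside it, which is exactly the limitation the paper attributes to narrow AI.

```python
import re

# Hypothetical rules for a single restricted context (taking a coffee order).
# Each rule pairs a regex with a canned response template.
RULES = [
    (r"\b(hi|hello)\b", "Hello! What size coffee would you like?"),
    (r"\b(small|medium|large)\b", "Got it, a {0} coffee. Anything else?"),
    (r"\b(no|nothing)\b", "Thanks for your order!"),
]

def reply(utterance: str) -> str:
    """Return the canned response for the first matching rule."""
    text = utterance.lower()
    for pattern, response in RULES:
        match = re.search(pattern, text)
        if match:
            return response.format(match.group(0))
    # Outside the restricted context the agent has nothing to say --
    # it cannot generalize, only match.
    return "Sorry, I can only help with coffee orders."
```

Such a system can appear competent within its domain ("A large, please" yields a sensible reply) while having no capacity whatsoever for open-ended conversation, illustrating why mastery of restricted dialogue does not scale to dialogue in its full generality.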

The implications of this research are notable both practically and theoretically. Practically, the argument bears on the design and deployment of AI systems, favoring highly specialized applications within narrow domains. Theoretically, the paper asserts a boundary on what computational models can achieve with respect to human-level intelligence, prompting a reevaluation of existing assumptions and a shift toward realistic goals for AI research rather than the pursuit of AGI.

The authors further examine the biology of human language as an evolved capability, shaped by neurological and social development over millennia, and note its variability and context-driven character. This contrasts with the static, predefined nature of algorithmic models. They suggest that any model capable of replicating human-like dialogue would require a form of embodiment and experiential learning that current technological paradigms cannot simulate.

The paper thus argues for a refocusing of AI research: recognizing the fundamental limits on emulating human dialogue behavior, and concentrating instead on narrow AI systems that can provide considerable utility without the unattainable ambition of AGI. This epistemological stance has ramifications for AI ethics, policy-making, and the setting of realistic objectives for future AI applications. In summary, Landgrebe and Smith take a decisive stance against the prospects of achieving AGI and encourage a more grounded, realistic approach to deploying AI.

Authors (2)
  1. J. Landgrebe (1 paper)
  2. B. Smith (86 papers)
Citations (8)