
What the F*ck Is Artificial General Intelligence?

Published 31 Mar 2025 in cs.AI (arXiv:2503.23923v2)

Abstract: Artificial general intelligence (AGI) is an established field of research. Yet some have questioned if the term still has meaning. AGI has been subject to so much hype and speculation it has become something of a Rorschach test. Melanie Mitchell argues the debate will only be settled through long term, scientific investigation. To that end here is a short, accessible and provocative overview of AGI. I compare definitions of intelligence, settling on intelligence in terms of adaptation and AGI as an artificial scientist. Taking my cue from Sutton's Bitter Lesson I describe two foundational tools used to build adaptive systems: search and approximation. I compare pros, cons, hybrids and architectures like o3, AlphaGo, AERA, NARS and Hyperon. I then discuss overall meta-approaches to making systems behave more intelligently. I divide them into scale-maxing, simp-maxing, w-maxing based on the Bitter Lesson, Ockham's and Bennett's Razors. These maximise resources, simplicity of form, and the weakness of constraints on functionality. I discuss examples including AIXI, the free energy principle and The Embiggening of LLMs. I conclude that though scale-maxed approximation dominates, AGI will be a fusion of tools and meta-approaches. The Embiggening was enabled by improvements in hardware. Now the bottlenecks are sample and energy efficiency.

Summary

  • The paper critically examines AGI definitions and architectures by integrating search and approximation methods to enhance adaptive intelligence.
  • It highlights hybrid models, such as neuro-symbolic and cognitive architectures, which combine neural networks with symbolic reasoning to overcome key limitations.
  • The paper explores optimization strategies—scale-maxing, simp-maxing, and w-maxing—to effectively balance resource scaling with model simplicity and adaptability.

Overview of "What the F*ck Is Artificial General Intelligence?"

AGI is a field marked by substantial hype and pervasive ambiguity. Despite its roots in rigorous scientific exploration, AGI often serves as a Rorschach test for varying expectations and misconceptions. The paper critically examines AGI definitions, suggests plausible architectures, and explores foundational technologies for developing such systems. The discourse pivots towards intelligence as adaptability and posits AGI as an artificial scientist.

Defining Intelligence and AGI

Intelligence is treated as adaptability within resource constraints, diverging from traditional human-centric definitions. Chollet frames intelligence as the efficiency of acquiring new skills, while Legg and Hutter define it as the ability to achieve goals across a wide range of environments. A crucial critique targets computational dualism, the treatment of intelligence as software cleanly separable from the hardware and environment that realize it, an assumption the paper challenges. Intelligence is instead reformulated as a measure of a system's capability to complete tasks, emphasizing internal cohesion and environmental interaction without a strict separation between goals and intelligence.

Tools for Building AGI: Search and Approximation

  • Search: Commonly used in planning and theorem proving, search explores structured spaces to find solutions. It offers precision and interpretability, but does not scale trivially: large state spaces make exhaustive exploration combinatorially expensive.
  • Approximation: Dominating contemporary AI, approximation employs models to map inputs to outputs by learning from data. Although computationally parallelizable and robust to unstructured data, this approach suffers from sample and energy inefficiency.
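The contrast can be made concrete with a toy optimization problem (a hypothetical example, not from the paper; the objective `f` and sample points are illustrative choices). Search enumerates a structured space exactly, while approximation fits a model to a few samples and queries the model instead:

```python
import numpy as np

def f(x):
    # Toy objective with a single peak at x = 30.
    return -(x - 30) ** 2

# Search: exhaustively explore a structured space. Exact, but the cost
# grows with the size of the space (100 evaluations here).
best_by_search = max(range(100), key=f)

# Approximation: learn a model from a handful of samples, then query
# the cheap model rather than the objective itself.
xs = np.array([0.0, 10.0, 50.0])
a, b, _ = np.polyfit(xs, [f(x) for x in xs], deg=2)  # fit a quadratic model
best_by_model = -b / (2 * a)                          # peak of the fitted model
```

Here the model class happens to match the objective, so the approximation recovers the true optimum; in general it only approximates it, and its quality is bounded by the data it was fit to.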

Hybrid architectures, which combine the strengths of search and approximation, emerge as a promising route to greater generalization and autonomy.

Hybrid Architectures and AGI Frameworks

Several hybrid models and architectures are discussed:

  • AlphaGo exemplifies a synergy of search for optimal moves and neural networks for outcome prediction.
  • Neuro-symbolic hybrids merge neural networks for interpreting raw data with symbolic processing for high-level reasoning, tackling the symbol grounding problem.
  • Cognitive architectures like Hyperon, AERA, and NARS aim at holistic cognition by integrating distinct functional modules for perception, learning, and reasoning.

These architectures illustrate diverse approaches to achieving AGI, moving beyond classical singular heuristic solutions.
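The AlphaGo pattern, tree search steered by an approximate value function rather than exhaustive expansion, can be sketched in miniature (a toy state space and hand-written heuristic, not AlphaGo's actual components):

```python
import heapq

GOAL = 13  # toy target state

def value_estimate(state):
    # Stand-in for a learned value network: a cheap guess at how
    # promising a state is (lower = closer to the goal).
    return abs(state - GOAL)

def guided_search(start):
    # Best-first search over the toy action space {+1, *2}, prioritized
    # by the approximate value function instead of expanding everything.
    frontier = [(value_estimate(start), start, [start])]
    seen = set()
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path
        if state in seen or state > 2 * GOAL:
            continue
        seen.add(state)
        for nxt in (state + 1, state * 2):
            heapq.heappush(frontier, (value_estimate(nxt), nxt, path + [nxt]))
    return None

path = guided_search(1)  # a sequence of increments and doublings ending at 13
```

Real systems replace the heuristic with a trained network and best-first expansion with Monte Carlo tree search, but the division of labour is the same: approximation prunes, search verifies.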

Meta-Approaches: Optimization Principles

Meta-approaches guide how systems are optimized to behave intelligently. Three main philosophies are discussed:

  • Scale-Maxing: Leveraging ever more compute and data for performance gains. Despite the success of models like GPT-3, scaling confronts diminishing returns and mounting energy and environmental costs.
  • Simp-Maxing: Emphasizing simplicity of models to improve generalization, drawing on theories like Kolmogorov Complexity and the Minimum Description Length Principle. While offering a framework for producing parsimonious models, simplicity does not inherently ensure adaptability across diverse contexts.
  • W-Maxing: Advocates weakening the constraints on functionality to optimize for adaptability and autonomy, resembling principles observed in biological self-organizing systems.
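Simp-maxing's core intuition, that shorter descriptions capture more structure, can be illustrated with a crude two-part code (a toy stand-in for Kolmogorov complexity, which is itself uncomputable; the encoding scheme is an assumption of this sketch, not the paper's):

```python
def description_length(s):
    # Crude two-part code: the cheaper of (a) writing s out literally and
    # (b) writing a repeating pattern plus the digits of a repeat count.
    best = len(s)  # cost of the literal encoding
    n = len(s)
    for k in range(1, n // 2 + 1):
        if n % k == 0 and s[:k] * (n // k) == s:
            best = min(best, k + len(str(n // k)))
    return best

structured = "ab" * 50                    # highly regular data: tiny description
irregular = "abbbaabababbbaababbaab"      # no exact repeating unit: stays literal
```

The regular string compresses to a few symbols while the irregular one does not; a simp-maxing system prefers hypotheses that expose such structure, on the bet that they generalize better.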

Each approach carries distinct implications for building adaptable, scalable systems capable of general learning and reasoning.
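The w-maxing intuition can also be sketched as a toy (a hypothetical set-based framing, not Bennett's actual formalism): among hypotheses consistent with the observed data, prefer the one that constrains future behaviour least, i.e. whose extension admits the most cases.

```python
# Observed input/output pairs.
train = {(0, 0), (2, 2), (4, 4)}

# Candidate hypotheses, each represented by its extension: the full set
# of input/output pairs it permits.
hypotheses = {
    "identity on evens below 10": {(x, x) for x in range(0, 10, 2)},
    "identity on 0..9":           {(x, x) for x in range(10)},
}

# Keep only hypotheses consistent with the data, then pick the weakest:
# the one whose extension is largest.
consistent = {name: ext for name, ext in hypotheses.items() if train <= ext}
weakest = max(consistent, key=lambda name: len(consistent[name]))
# The weaker hypothesis also covers unseen cases such as (3, 3).
```

Both hypotheses fit the training pairs, but the weaker one generalizes to inputs the stronger one excludes, mirroring the claim that weak constraints on functionality favour adaptability.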

Conclusion

Contemporary advances in AGI have been driven largely by scale and resource investment, yet, as those gains diminish, progress likely lies in hybrid approaches and principled optimization strategies. The paper argues that adaptable intelligence, akin to that of a human scientist, depends on integrating multiple approaches to learning, reasoning, and efficient adaptation. The future of AGI lies in harmonizing these methodologies into systems with genuine generalization capacity, able to navigate complex environments autonomously.
