What the F*ck Is Artificial General Intelligence? (2503.23923v1)

Published 31 Mar 2025 in cs.AI

Abstract: Artificial general intelligence (AGI) is an established field of research. Yet Melanie Mitchell and others have questioned if the term still has meaning. AGI has been subject to so much hype and speculation it has become something of a Rorschach test. Mitchell points out that the debate will only be settled through long-term, scientific investigation. To that end, here is a short, accessible and provocative overview of AGI. I compare definitions of intelligence, settling on intelligence in terms of adaptation and AGI as an artificial scientist. Taking my cue from Sutton's Bitter Lesson, I describe two foundational tools used to build adaptive systems: search and approximation. I compare pros, cons, hybrids and architectures like o3, AlphaGo, AERA, NARS and Hyperon. I then discuss overall meta-approaches to making systems behave more intelligently. I divide them into scale-maxing, simp-maxing, w-maxing based on the Bitter Lesson, Ockham's and Bennett's Razors. These maximise resources, simplicity of form, and the weakness of constraints on functionality. I discuss examples including AIXI, the free energy principle and The Embiggening of LLMs. I conclude that though scale-maxed approximation dominates, AGI will be a fusion of tools and meta-approaches. The Embiggening was enabled by improvements in hardware. Now the bottlenecks are sample and energy efficiency.

Summary

  • The paper critically examines the concept of Artificial General Intelligence (AGI) and explores definitions, foundational approaches, and meta-strategies for its development.
  • It contrasts traditional search methods with modern approximation techniques like neural networks, advocating for hybrid systems that combine their strengths for AGI development.
  • The paper introduces meta-approaches like Scale-Maxing, Simp-Maxing, and W-Maxing as guiding principles for optimizing AGI development, highlighting the potential for future autonomous, adaptive systems.

An Expert Overview of "What the F*ck Is Artificial General Intelligence?"

The paper "What the F*ck Is Artificial General Intelligence?" by Michael Timothy Bennett offers a critical examination of the concept and implications of AGI within the field of artificial intelligence research. AGI represents a level of machine intelligence equivalent to human intellectual capacity across a wide range of tasks, challenging conventional boundaries of AI capabilities.

Definitions and Philosophical Foundations

The author identifies a pressing need to consolidate the many competing interpretations of AGI. While traditional positions equate AGI with human-level performance, other definitions emphasize adaptability and the ability to satisfy goals across diverse environments. The paper settles on intelligence as adaptation under resource constraints, a definition intended to transcend computational dualism and to treat intelligence holistically as a product of software, hardware, and environment.
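
The summary does not reproduce a formal definition, but the goal-satisfaction view it mentions is commonly formalized along the lines of Legg and Hutter's universal intelligence measure; citing that particular formula is an addition of this overview rather than something stated in the paper.

```latex
% Universal intelligence of a policy \pi: expected performance summed over all
% computable environments \mu, with simpler environments (lower Kolmogorov
% complexity K(\mu)) weighted more heavily.
\[
  \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
\]
```

Here E is the set of computable environments and V_μ^π is the expected cumulative reward of π in μ; AIXI, cited in the abstract, is the idealized agent built around this kind of environment-weighted optimality.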

Foundational Approaches: Search and Approximation

The paper delineates two foundational tools for building adaptive systems: search and approximation. Search methods, typified by symbolic reasoning and planning algorithms such as A*, offer precision and interpretability but scale poorly as state spaces grow. Approximation methods, notably neural networks such as CNNs and transformers, cope well with large state spaces and noisy data but are sample- and energy-inefficient.
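
To make the contrast concrete, here is a minimal A* sketch on a toy grid. The grid encoding, the Manhattan-distance heuristic, and the function names are illustrative assumptions of this overview, not details taken from the paper.

```python
import heapq

def a_star(grid, start, goal):
    """Minimal A* on a 2D grid of 0 (free) and 1 (blocked) cells.

    Illustrative only: the grid representation and the Manhattan-distance
    heuristic are assumptions, not taken from the paper.
    """
    def h(cell):  # admissible Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    frontier = [(h(start), 0, start, [start])]  # (f = g + h, g, cell, path)
    best_g = {start: 0}

    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path  # cheapest path found
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        frontier, (ng + h((r, c)), ng, (r, c), path + [(r, c)])
                    )
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # routes around the wall via (0,2) and (2,2)
```

The explicitly maintained frontier and admissible heuristic are what give search its precision and interpretability, and also what become intractable in the large, noisy state spaces where approximation takes over.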

Hybrid Systems and Cognitive Architectures

Recognizing the limitations of standalone search or approximation strategies, Bennett advocates for hybrid systems that exploit the strengths of both methodologies. Examples like AlphaGo demonstrate the efficacy of combining search and approximation to navigate complex strategic scenarios. Furthermore, comprehensive architectures such as Hyperon and AERA suggest pathways to AGI through modular and self-organizing systems, integrating various cognitive functions for enhanced adaptability.
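
The general pattern behind such hybrids can be sketched as search that defers to a learned evaluator wherever it must stop. The toy game, stub evaluator, and depth-limited negamax below are assumptions made here for illustration; they are not AlphaGo's actual algorithm, which couples Monte Carlo tree search with trained policy and value networks.

```python
import random

def value_estimate(state):
    """Stand-in for a learned value network, returning a score in [-1, 1].

    In a real hybrid system this would be a trained neural network; here it
    is a deterministic random stub, purely for illustration.
    """
    return random.Random(hash(state)).uniform(-1.0, 1.0)

def legal_moves(state):
    """Toy game: a state is the tuple of moves played so far, at most 4 deep."""
    return [] if len(state) >= 4 else [0, 1, 2]

def negamax(state, depth):
    """Depth-limited search that hands over to the evaluator at its frontier."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return value_estimate(state)  # approximation takes over where search stops
    return max(-negamax(state + (m,), depth - 1) for m in moves)

def choose_move(state, depth=2):
    """Pick the move whose searched-plus-estimated value is best."""
    return max(legal_moves(state), key=lambda m: -negamax(state + (m,), depth - 1))

print(choose_move(()))  # 0, 1 or 2, depending on the stub's estimates
```

Replacing the stub with a trained value network and the exhaustive enumeration with guided sampling recovers the AlphaGo-style division of labour: approximation prunes and evaluates, search verifies.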

Meta-Approaches: Scale-Maxing, Simp-Maxing, and W-Maxing

The paper introduces meta-approaches as guiding principles for optimizing AGI development:

  1. Scale-Maxing: Leveraging ever more computational resources has driven major advances, as seen in The Embiggening of models like GPT-3. However, the paper cautions that this approach yields diminishing returns and handles edge cases poorly because of its inherent sample inefficiency.
  2. Simp-Maxing: Rooted in the Minimum Description Length principle and Kolmogorov complexity, this approach favors simpler models on the grounds that they generalize better (a crude MDL sketch follows this list). While this traditional view has theoretical appeal, it does not fully account for the variability of intelligent behavior in real-world environments.
  3. W-Maxing: This approach prioritizes maximizing the weakness of constraints on functionality, in line with enactive theories of cognition. By optimizing for both sample and energy efficiency, w-maxing mirrors the adaptability of biological systems and delegates control to lower levels of abstraction, pointing toward a more holistic route to AGI.
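
To give simp-maxing a concrete face, here is a crude MDL-style model-selection sketch. The two-part coding scheme (a fixed number of bits per coefficient plus a Gaussian code for the residuals) and the polynomial toy example are simplifying assumptions of this overview, not the paper's formalism.

```python
import numpy as np

def description_length(x, y, degree, bits_per_param=32):
    """Crude two-part MDL score: bits for the model plus bits for the data given the model.

    The coding scheme is a simplifying assumption for illustration only.
    """
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n = len(x)
    model_bits = (degree + 1) * bits_per_param           # cost of stating the hypothesis
    sigma2 = max(np.mean(residuals ** 2), 1e-12)         # avoid log(0) on perfect fits
    data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)  # Gaussian residual code
    return model_bits + data_bits

# Noisy samples from a quadratic: MDL should prefer the simplest adequate model.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + rng.normal(0, 0.3, size=x.size)

scores = {d: description_length(x, y, d) for d in range(1, 9)}
print(min(scores, key=scores.get))  # typically 2: higher degrees fit noise, not structure
```

Scoring each hypothesis by model bits plus data-given-model bits selects the simplest polynomial that still explains the data, which is the Ockham's Razor intuition that simp-maxing generalizes.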

Implications and Future Directions

The paper argues that the future of AGI lies in a fusion of scale-maxing, simp-maxing, and w-maxing rather than in any single meta-approach. The author highlights an ongoing transition from reliance on ever-larger models, now bottlenecked by sample and energy efficiency, toward hybrid and increasingly autonomous systems capable of sophisticated cognitive tasks. With significant economic implications on the horizon, architectures like Hyperon and AERA are positioned to drive models that emulate the adaptive intelligence of human scientists in dynamic environments.

Ultimately, Bennett's survey challenges the AI research community to rethink foundational assumptions and maps out strategic avenues for achieving AGI. Its conclusion is that by embracing a fusion of tools and meta-approaches, building truly adaptive, sample- and energy-efficient artificial scientists becomes attainable.
