
Incomputability of AGI

Updated 9 December 2025
  • The review details how AGI is defined to require open-ended, adaptive intelligence but faces insurmountable obstacles from formal computability limits such as the halting problem and Gödel’s incompleteness.
  • It examines current AI architectures, including neural networks and universal intelligence models, highlighting their inability to transcend finite, static function spaces.
  • The review discusses practical workarounds like finite approximations that enable incremental progress but sacrifice the full generality and creative capacity essential to AGI.

AGI is widely conceptualized as an engineered agent capable of adaptive, open-ended, and creative mastery across any domain or context, exhibiting properties that match or exceed those of human intelligence. However, the ambition of constructing AGI faces fundamental theoretical obstructions rooted in computability, mathematical logic, and the architectures of existing artificial intelligence systems. These barriers are formalized through a spectrum of impossibility theorems, reduction arguments, and analyses of representational and adaptability limitations. The incomputability of AGI is not merely a practical challenge, but a mathematically substantiated constraint—arising from the very fabric of algorithmic and formal reasoning. This article reviews the principal theorems, formal definitions, architectural critiques, and computability-theoretic obstacles that collectively define the incomputability of AGI.

1. Formal Definitions of AGI and the Scope of Incomputability

AGI is typically delineated via functional, behavioral, and creativity-centric criteria. One foundational definition posits AGI as a system $S$ with an initial set of computable capabilities $F_0 = \{f_1, \ldots, f_n\}$, which must be able to generate, given some input $x$, a genuinely new function $f^* \notin F_0$, that is, to “unlock new and previously unknown functional capabilities in that field” (Mappouras et al., 4 Dec 2025). Alternatively, AGI is operationalized as a system that, for every task $T$ in a broad task family $\mathcal{T}$, and every instance $x$ for which the correct answer is verifiable by a human, returns that answer with nonzero probability (“artificial general intelligence for $\mathcal{T}$”) (Panigrahy et al., 25 Sep 2025). Other characterizations require mastery of open-ended dialogue, integrating linguistic, contextual, and non-verbal cues on arbitrary topics, such that its behavior is indistinguishable from that of a fully competent human interlocutor (Landgrebe et al., 2019).
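For concreteness, the second definition can be stated compactly (an illustrative paraphrase of Panigrahy et al., 25 Sep 2025, not their exact notation, with $a(x)$ denoting the human-verifiable correct answer):

$$\text{AGI}_{\mathcal{T}}(S) \iff \forall T \in \mathcal{T},\ \forall x \in T \text{ with verifiable answer } a(x):\quad \Pr[S(x) = a(x)] > 0$$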

These definitions uniformly assume generality, creativity, and open-ended adaptation as necessary conditions, closely paralleling the operational and philosophical stances advanced by Turing (via the Imitation Game), Brooks (arthropod intelligence), Legg and Hutter (universal intelligence), and prominent AGI research programs (Schaul et al., 2011, Cooper, 2013, Bennett, 2022, Landgrebe et al., 2021, Bui, 23 Nov 2025).

2. Computability-Theoretic Barriers: The Halting Problem and Diagonalization

The central barrier to AGI’s computability arises directly from classical results in computability theory, notably Turing’s undecidability of the halting problem and Gödel’s incompleteness theorems (Cooper, 2013). If one formalizes AGI via universal intelligence (Legg and Hutter’s $\Upsilon(\pi)$), the metric is incomputable: it sums over an infinite set of computable environments using incomputable weights $2^{-K(\mu)}$, where $K(\mu)$ is the Kolmogorov complexity of the environment $\mu$. Since $K(\mu)$ is uncomputable, and verifying optimality over all environments reduces to solving the halting problem, such definitions are intrinsically nonconstructive (Schaul et al., 2011, Landgrebe et al., 2021).
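For reference, the Legg–Hutter measure has the standard form

$$\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)}\, V_\mu^\pi,$$

where $E$ is the class of all computable environments and $V_\mu^\pi$ is the expected cumulative reward of policy $\pi$ in environment $\mu$; both the Kolmogorov weights and the infinite sum block any effective evaluation.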

Diagonalization techniques show that even with strict safety and trust requirements, any system that never outputs a false answer must abstain (“?”) on certain human-decidable instances (those that encode self-reference), thereby failing to satisfy the AGI definition in its full generality (Panigrahy et al., 25 Sep 2025). For example, in domains such as program verification or planning, one can always construct instances that force the system to abstain even though a human can prove the correct answer.
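The flavor of the construction can be sketched in a few lines of Python. Here `system` is a hypothetical never-wrong verdict oracle assumed only for the sketch (it is not an artifact of the cited paper), and the self-reference is obtained by quoting the function’s own source:

```python
import inspect

def system(source: str) -> str:
    """Hypothetical safe/trusted oracle: returns 'halts', 'loops',
    or abstains with '?', and is never wrong (assumption)."""
    raise NotImplementedError("stand-in for the hypothesized AGI")

def diagonal() -> str:
    """Self-referential instance: obtain the oracle's verdict on this
    very function, then do the opposite of whatever it predicts."""
    me = inspect.getsource(diagonal)   # quote our own source code
    verdict = system(me)
    if verdict == "halts":
        while True:                    # refute a 'halts' verdict by looping
            pass
    return "done"                      # refute a 'loops' verdict by halting
```

If `system` answers “halts”, `diagonal` loops; if it answers “loops”, `diagonal` halts; a never-wrong system therefore has no option but “?” on exactly this kind of instance.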

These insights are formally encoded in incompatibility theorems:

$$\neg\exists\, S\ \bigl[\, \text{Safe}(S) \wedge \text{Trust}(S) \wedge \text{AGI}(S) \,\bigr] \qquad \text{(Panigrahy et al., 25 Sep 2025)}$$

establishing the impossibility, under strict interpretations, of building a simultaneously safe/trusted AGI system for a rich class of computational tasks.

3. Proofs of Incomputability from Algorithmic and Circuit Models

From a recursion-theoretic perspective, any algorithm, realized as a finite Boolean circuit, cannot generate new functional capabilities outside the compositional closure of its initial function set. Formally, for any algorithm $A$ implemented by $k$ NAND gates with initial repertoire $F_0$, it is impossible to produce an output corresponding to a function $f^* \notin \langle F_0 \rangle$ (the closure of $F_0$ under composition and permutation) (Mappouras et al., 4 Dec 2025):

$$\neg\exists A,\ \exists f^* \notin \langle F_0 \rangle,\ \exists x \in X:\quad A(x) = f^*(x)$$

The minimal-circuit argument demonstrates that whatever combinatorial arrangements an algorithm can embody, its space of behaviors is strictly confined to the closure generated by its original set of gates and logical primitives. No combination, permutation, or nesting of existing computable functions can yield a mapping genuinely outside of this set. The compositional algebra is finite and fixed; “true innovation”—the central hallmark of AGI—is excluded.
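A small, self-contained illustration of this saturation (invented for exposition, not code from the cited paper) computes $\langle F_0 \rangle$ for $F_0 = \{\text{NAND}\}$ over two input wires by iterating pointwise composition on truth tables until a fixed point:

```python
from itertools import product

def closure(funcs, arity=2, max_rounds=16):
    """Compositional closure <F0> of a set of binary Boolean gates over
    `arity` input wires: each derivable function is represented by its
    truth table, and composition is iterated until a fixed point."""
    inputs = list(product([0, 1], repeat=arity))
    # Seed with the projections, i.e., the bare input wires.
    tables = {tuple(x[i] for x in inputs) for i in range(arity)}
    for _ in range(max_rounds):
        new = set(tables)
        for f in funcs:
            for a in tables:
                for b in tables:
                    new.add(tuple(f(u, v) for u, v in zip(a, b)))
        if new == tables:   # fixed point: nothing new is derivable
            break
        tables = new
    return tables

nand = lambda a, b: 1 - (a & b)
print(len(closure({nand})))   # 16: all 2-input functions, then saturation
```

Once the set saturates, further composition is inert; the claim of (Mappouras et al., 4 Dec 2025) is that every algorithm’s behavior is confined to such a saturated set, so no input $x$ can elicit a function outside it.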

4. Incomputability in Universal Intelligence, Reinforcement Learning, and Asymptotic Optimality

Legg and Hutter's universal intelligence formalism $\Upsilon(\pi)$ is incomputable for two structural reasons: (1) uncomputability of Kolmogorov complexity, and (2) the necessity to sum over an infinite class of computable environments (Schaul et al., 2011). The reinforcement learning formalism AIXI is similarly incomputable by virtue of its reliance on Solomonoff’s universal prior, which incorporates uncomputable quantities (Bennett, 2022). Any attempt to define or realize asymptotically optimal agents, even in the weakest “average performance” sense across all computable environments, results in inherently non-computable or even non-existent agents, depending on discounting schemes (Lattimore et al., 2011). No computable policy can achieve weak asymptotic optimality on the class of all computable environments.
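The uncomputable ingredient in AIXI is the Solomonoff prior, which in its standard monotone form assigns to an observation prefix $x$ the weight

$$M(x) = \sum_{p\,:\,U(p) = x\ast} 2^{-\ell(p)},$$

summing over all programs $p$ whose output on the universal monotone machine $U$ begins with $x$; deciding which programs contribute is itself halting-complete, so $M$ can only be approximated, never computed.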

The fundamental insight is captured in the following table:

Formalism              | Incomputability Source                               | Implication for AGI
Universal intelligence | Kolmogorov complexity; sum over all computable $\mu$ | AGI measure uncomputable
AIXI                   | Solomonoff prior (Kolmogorov complexity)             | Agent cannot be constructed
Asymptotic optimality  | Diagonalization; uncomputable reward-optimal policy  | Any optimal agent is incomputable

5. Structural and Architectural Barriers: Neural Network Paradigms and Gödelian Incompleteness

Beyond foundational logic and computability, the architectures of current AI systems impose severe limitations. Neural-network-based agents, regardless of scale or training regime, are statically instantiated function approximators. The Universal Approximation Theorem is strictly limited to static mappings over compact domains; it does not confer self-modification, contextual adaptability, or dynamic structural evolution (Bui, 23 Nov 2025). Scaling laws (e.g., empirical parameter-count/loss relationships) provide no mechanism to transcend the intrinsic expressivity ceiling of formal systems.
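The limitation is visible in the theorem’s classical statement: for any continuous $f$ on a compact $K \subset \mathbb{R}^n$ and any $\varepsilon > 0$, there exists a fixed one-hidden-layer network

$$g(x) = \sum_{i=1}^{N} \alpha_i\, \sigma(w_i^\top x + b_i) \quad \text{with} \quad \sup_{x \in K} |f(x) - g(x)| < \varepsilon,$$

for suitable non-polynomial activation $\sigma$. This is an existence guarantee for a single static mapping; nothing in it allows the network to enlarge or restructure its own function space.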

The Gödelian incompleteness argument, as recast for neural networks, asserts that any computational system, however powerful, operates within some formal system $F$, and thus cannot exhibit unbounded or self-referential reasoning capacity. AGI, as an instantiation of “strong AI,” is impossible under this paradigm since it cannot “see” truths (e.g., Gödel sentences $G(F)$) that are not provable within its own structure.
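In its standard Gödel–Rosser form, the invoked result states that for any consistent, recursively axiomatizable formal system $F$ interpreting enough arithmetic, there is a sentence $G(F)$ with

$$F \nvdash G(F) \quad \text{and} \quad F \nvdash \neg G(F),$$

so any agent whose reasoning is exhausted by a fixed $F$ has truths it can neither prove nor refute relative to its own system.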

Architecturally, true AGI would require frameworks that combine existing computational facilities with higher-order architectural organization, including explicit support for dynamic restructuring, metaprogramming, and the emergence of recursively self-modifying representations, capacities absent from current neural paradigms (Bui, 23 Nov 2025).

6. Dialogue, Language, and the Impossibility of Explicit Model-Based or Data-Driven AGI

The mathematical and statistical obstacles are particularly apparent in the domain of open-ended language and dialogue. Full human-level conversational mastery demands unbounded context dependence, non-stationarity, and adaptation to generative processes that exceed any finite corpus or fixed probabilistic model (Landgrebe et al., 2019). There exists no finite system of equations, deterministic or stochastic, that captures the infinite, shifting context of human conversation. No learning algorithm, whether end-to-end, generative adversarial, or reinforcement-based, can be trained on any finite or recursively enumerable dataset to achieve the requisite coverage and contextual disambiguation demanded by true AGI.

These results formally reduce the task of full dialogue-level AGI to the halting problem: if AGI were achievable for such tasks, one could use it as a black box oracle to decide the halting problem, contradicting Turing undecidability (Cooper, 2013, Landgrebe et al., 2019).
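The reduction is short enough to sketch directly. Below, `agi` is the hypothetical black-box dialogue oracle assumed by the argument, not an available API; the point is only that its existence would yield a halting decider:

```python
def agi(question: str) -> str:
    """Hypothetical dialogue-level AGI: answers any question a fully
    competent human interlocutor could answer (assumption)."""
    raise NotImplementedError("assumed oracle; the reduction shows it cannot exist")

def decide_halting(program_source: str, input_data: str) -> bool:
    """If dialogue-level AGI existed, querying it as a black box would
    decide the halting problem, contradicting Turing undecidability."""
    question = (
        "Consider the following program and input.\n"
        f"Program:\n{program_source}\n"
        f"Input: {input_data!r}\n"
        "Does the program halt on this input? Answer 'yes' or 'no'."
    )
    return agi(question).strip().lower() == "yes"
```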

7. Approaches to Circumventing Incomputability: Weakening or Redefining Generality

Finite approximations, such as restricting the universal intelligence measure to environments with bounded runtime and description length (Levin complexity), or using sampling-based Monte Carlo evaluation over a constrained Game Description Language, can render versions of AGI computable in practice, but at the cost of genuine generality (Schaul et al., 2011). The “anytime” evaluation metric and the “weakness” proxy (a computable, generalization-oriented alternative to Kolmogorov complexity) allow for practical benchmarking and incremental progress toward broader, but still fundamentally non-general, intelligences (Bennett, 2022). Such modifications provide frameworks for system evaluation and competition but acknowledge the principled gap between these practical surrogates and the original, incomputable ideal of AGI.
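A toy version of such a finite approximation (all interfaces, weights, and the environment class here are invented for the sketch) replaces the incomputable sum over environments with Monte Carlo sampling of bounded ones:

```python
import random

class CoinEnv:
    """Toy bounded environment: reward 1 when the action matches a hidden
    bit that flips with probability p (a stand-in for a sampled
    computable environment with a short description)."""
    def __init__(self, p):
        self.p, self.bit = p, 0
    def reset(self):
        self.bit = random.randint(0, 1)
        return self.bit
    def step(self, action):
        reward = 1.0 if action == self.bit else 0.0
        if random.random() < self.p:
            self.bit ^= 1          # environment dynamics
        return self.bit, reward

def sample_env():
    p = random.choice([0.0, 0.1, 0.5])
    weight = 2.0 ** -len(f"coin:{p}")   # crude description-length weight
    return CoinEnv(p), weight

def approx_intelligence(policy, n_envs=200, horizon=50):
    """Monte Carlo surrogate for a universal-intelligence-style score:
    mean description-length-weighted return over sampled environments,
    each run for a bounded number of steps (a Levin-style cutoff)."""
    total = 0.0
    for _ in range(n_envs):
        env, weight = sample_env()
        obs, ret = env.reset(), 0.0
        for _ in range(horizon):
            obs, reward = env.step(policy(obs))
            ret += reward
        total += weight * ret
    return total / n_envs

print(approx_intelligence(lambda obs: obs))  # policy: echo the last bit
```

Every quantity here is computable by construction, which is precisely why the score measures only a bounded surrogate of generality rather than $\Upsilon$ itself.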

Ultimately, unless foundational mathematical results are circumvented—via physical or non-computable oracle-like processes, or via architectures that step outside the Church-Turing framework—full AGI, as the union of safety, trust, creativity, compositional generality, and unbounded adaptability, remains formally incomputable (Cooper, 2013, Mappouras et al., 4 Dec 2025, Panigrahy et al., 25 Sep 2025, Landgrebe et al., 2019, Landgrebe et al., 2021, Bui, 23 Nov 2025).
