Transformative AGI by 2043 is <1% likely (2306.02519v1)

Published 5 Jun 2023 in cs.AI

Abstract: This paper is a submission to the Open Philanthropy AI Worldviews Contest. In it, we estimate the likelihood of transformative artificial general intelligence (AGI) by 2043 and find it to be <1%. Specifically, we argue: The bar is high: AGI as defined by the contest - something like AI that can perform nearly all valuable tasks at human cost or less - which we will call transformative AGI is a much higher bar than merely massive progress in AI, or even the unambiguous attainment of expensive superhuman AGI or cheap but uneven AGI. Many steps are needed: The probability of transformative AGI by 2043 can be decomposed as the joint probability of a number of necessary steps, which we group into categories of software, hardware, and sociopolitical factors. No step is guaranteed: For each step, we estimate a probability of success by 2043, conditional on prior steps being achieved. Many steps are quite constrained by the short timeline, and our estimates range from 16% to 95%. Therefore, the odds are low: Multiplying the cascading conditional probabilities together, we estimate that transformative AGI by 2043 is 0.4% likely. Reaching >10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely. Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.

Authors (2)
  1. Ari Allyn-Feuer (4 papers)
  2. Ted Sanders (3 papers)
Citations (2)

Summary

A Probabilistic Assessment of Transformative AGI by 2043

Ari Allyn-Feuer and Ted Sanders present a meticulous analysis of the likelihood of achieving transformative artificial general intelligence (AGI) by 2043 in their paper "Transformative AGI by 2043 is <1% likely," submitted to the Open Philanthropy AI Worldviews Contest. Their central thesis is that the probability of such an achievement is less than 1%, because it requires the conjunction of several necessary developments across software, hardware, and sociopolitical domains. Their framework decomposes the question into these prerequisite steps and provides a systematic method for estimating the conditional probability of each step, given the steps before it.
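
Formally (our notation, not the paper's), the framework evaluates

    P(transformative AGI by 2043) = \prod_{i=1}^{n} P(\text{step}_i \mid \text{step}_1, \ldots, \text{step}_{i-1})

where each step is one of the necessary software, hardware, or sociopolitical milestones, so a single low factor suppresses the whole product.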

The paper is built on the premise that transformative AGI, a capability permitting AI systems to perform nearly all valuable tasks at human cost or less, presents a much higher bar than other forms of advanced AI development. The authors argue that several key steps are each necessary yet individually uncertain within the given timeframe, including advances in algorithms, learning methods, robot bodies, and semiconductor production capacity.

Core Arguments and Numerical Analysis

  1. Algorithmic and Learning Challenges:
    • The authors propose a 60% chance of the fundamental algorithmic breakthroughs necessary for transformative AGI. Given recent advances such as Transformers and GANs, this seems moderately feasible. They are less optimistic, however, about a paradigm shift that would let AGIs learn tasks efficiently without following the slow, sequential pattern natural to human learning, assigning it only a 40% probability.
  2. Computational Efficiency:
    • A crucial requirement is lowering AGI inference costs to a rate competitive with human labor, which the authors judge only 16% likely. This demands monumental strides in hardware efficiency and cost, potentially a decrease of more than five orders of magnitude, a scenario deemed unlikely given current and forecast delays in semiconductor advances.
  3. Physical and Economic Constraints:
    • Assuming AGI is developed, the required physical scale-up of semiconductor manufacturing and energy provision remains improbable; even optimistic projections concede only a 46% chance, reflecting constraints from infrastructure investment cycles and physical resource limitations.
  4. Sociopolitical Stability:
    • War, pandemics, economic depressions, and restrictive regulation could each derail AGI efforts, and the authors assign a separate probability to avoiding each derailment; multiplied together with the technical steps, these yield the overall 0.4% joint estimate (see the sketch after this list). Notably, geopolitical tensions over semiconductor supplies from Taiwan amplify the concern that progress could be abruptly impeded by international conflict.
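
To make the cascading arithmetic concrete, here is a minimal Python sketch. The 60%, 40%, 16%, and 46% figures appear in the summary above; the remaining entries (robots, regulation, AI-caused delay, wars, pandemics, depressions) are the conditional estimates reported in the paper, and the product reproduces the headline 0.4%.

    # Cascading conditional probabilities from the paper's framework.
    # Each value is P(step succeeds by 2043 | all prior steps succeed).
    # The 0.60, 0.40, 0.16, and 0.46 figures are cited in the summary
    # above; the rest are the estimates reported in the paper's table.
    steps = {
        "invent algorithms for transformative AGI":  0.60,
        "AGIs learn tasks faster than humans":       0.40,
        "inference costs drop to a competitive rate": 0.16,
        "cheap, capable robot bodies at scale":      0.60,
        "massively scale chip and power production": 0.46,
        "avoid derailment by regulation":            0.70,
        "avoid derailment by AI-caused delay":       0.90,
        "avoid derailment from wars":                0.70,
        "avoid derailment from pandemics":           0.90,
        "avoid derailment from severe depressions":  0.95,
    }

    joint = 1.0
    for step, p in steps.items():
        joint *= p

    print(f"P(transformative AGI by 2043) ~= {joint:.2%}")  # ~0.40%

Because the factors multiply, the low 16% inference-cost step has the most headroom to move the product: raising it toward certainty would multiply the estimate more than sixfold.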

Analytical Framework and Future Perspectives

The authors employ a systematic probabilistic framework, multiplying conditional probabilities to arrive at the final estimate. This approach is intended to challenge simplistic linear or deterministic views of technological evolution by emphasizing the interplay of multiple dependencies and the likelihood of unexpected delays. The skepticism built into the approach curbs overconfidence, underscoring that the path toward transformative AGI is replete with significant, nontrivial impediments.
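
A quick back-of-the-envelope check, ours rather than the paper's, shows why the abstract notes that reaching >10% "seems to require probabilities that feel unreasonably high": with ten necessary steps, a joint probability of 10% demands a geometric-mean per-step success probability of roughly 79%.

    # Illustration (not from the paper): a target joint probability over
    # n conditional steps requires each step, on geometric average, to
    # succeed with probability target ** (1/n).
    n_steps = 10
    for target in (0.004, 0.03, 0.10):
        per_step = target ** (1 / n_steps)
        print(f"joint {target:.1%} over {n_steps} steps "
              f"needs ~{per_step:.1%} per step")
    # joint 0.4% needs ~57.6% per step; 10% needs ~79.4% per step.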

The authors' examination invites the AI research community to critically reassess optimistic AGI timelines and perhaps re-prioritize alignment and safety research without the pressure of imminent existential threat. The analysis suggests that, while revolutionary advances in AI are likely by 2043, they will more plausibly serve as scaffolding for transformative breakthroughs later in the century. Applying the same framework on an extended timeline, the authors estimate a 41% likelihood of transformative AGI by 2100.

Conclusion

Allyn-Feuer and Sanders provide a detailed probabilistic exploration of the potential journey to transformative AGI. Strictly analytical in posture, the paper emphasizes the need for close scrutiny and a nuanced understanding of AGI's prerequisites as tools for more precise anticipation of technological futures. The analysis is a call for measured expectations, fostering dialogue on how best to allocate resources and attention in pursuit of AI that reshapes economies and industries on a realistic timeframe and integrates safely into societal structures.
