
Thousands of AI Authors on the Future of AI (2401.02843v2)

Published 5 Jan 2024 in cs.CY, cs.AI, and cs.LG

Abstract: In the largest survey of its kind, 2,778 researchers who had published in top-tier AI venues gave predictions on the pace of AI progress and the nature and impacts of advanced AI systems. The aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028, including autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician, and autonomously downloading and fine-tuning an LLM. If science continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027, and 50% by 2047. The latter estimate is 13 years earlier than that reached in a similar survey we conducted only one year earlier [Grace et al., 2022]. However, the chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037, and 50% as late as 2116 (compared to 2164 in the 2022 survey). Most respondents expressed substantial uncertainty about the long-term value of AI progress: while 68.3% thought good outcomes from superhuman AI are more likely than bad, 48% of these net optimists gave at least a 5% chance of extremely bad outcomes such as human extinction, and 59% of net pessimists gave 5% or more to extremely good outcomes. Between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction. More than half suggested that "substantial" or "extreme" concern is warranted about six different AI-related scenarios, including misinformation, authoritarian control, and inequality. There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more.

Introduction

The trajectory of AI is a subject of global significance, shaping decision-making in the public sector, private industry, and academia. While the future of AI is hotly debated, there is no consensus among experts. Against this backdrop, a large-scale survey was conducted to elicit AI researchers' predictions about AI progress and its potential social consequences. The survey encompassed 2,778 AI researchers from leading conferences and is part of a series of inquiries into experts' expectations about AI development.

Survey Scope and Methodology

The 2023 Expert Survey on Progress in AI (ESPAI) included researchers from an expanded set of six top AI conferences, a significant increase in contributors over the previous year's survey. The questionnaire solicited responses via multiple-choice questions, probability estimates, and projected future years, aiming to probe the nature of future AI systems and the potential risks they may pose. To manage framing effects, subtly different framings of key questions were distributed randomly among participants.
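The idea of aggregating probabilistic forecasts can be illustrated with a simplified sketch. This is not the survey's actual statistical procedure, and the respondent data below is hypothetical: average each respondent's cumulative probability that a milestone is reached by a given year, then read off the first year at which the mean crosses 50%.

```python
# Illustrative sketch only (not ESPAI's actual method): aggregate expert
# forecasts by averaging individual cumulative probabilities per year,
# then find the first year the mean probability reaches a threshold.

def aggregate_forecast(responses, years, threshold=0.5):
    """responses: list of dicts mapping year -> P(milestone achieved by year)."""
    for year in years:
        mean_p = sum(r[year] for r in responses) / len(responses)
        if mean_p >= threshold:
            return year, mean_p
    return None, None  # threshold never reached within the horizon

# Hypothetical respondents' cumulative probabilities by year.
years = [2030, 2040, 2050, 2060]
responses = [
    {2030: 0.20, 2040: 0.50, 2050: 0.80, 2060: 0.90},
    {2030: 0.10, 2040: 0.30, 2050: 0.60, 2060: 0.80},
    {2030: 0.05, 2040: 0.20, 2050: 0.40, 2060: 0.70},
]

year, p = aggregate_forecast(responses, years)
print(year, round(p, 2))  # → 2050 0.6
```

In practice, forecast aggregation typically works with fitted continuous distributions rather than a coarse grid of years, but the averaging-then-thresholding step is the same in spirit.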

Results on AI Progress

According to the aggregated forecasts, there is at least a 50% chance that by 2028 AI systems will be able to autonomously build a payment processing site, compose a song indistinguishable from a new song by a popular musician, and independently download and fine-tune an LLM. The aggregate forecast also gives a 50% chance of AI outperforming humans in every possible task by 2047, a prediction that has moved 13 years closer than in the prior year's survey. These predictions reflect growing optimism about the potential of AI and an advancing timeline for achieving significant milestones.

Social Impacts and Concerns

When it comes to the social consequences of AI, the surveyed researchers expressed a mix of optimism and caution. While the majority indicated that positive outcomes are more likely, a notable share also acknowledged a significant risk of extremely negative scenarios, including the possibility of human extinction. More than half of the respondents recommended "substantial" or "extreme" levels of concern for six AI-related risks, such as the spread of misinformation and authoritarian control. Respondents disagreed about the preferred pace of AI development, but there was broad agreement that research aimed at reducing potential AI risks should be prioritized more.

This survey represents one of the most comprehensive inquiries into the anticipations of AI researchers. It not only sheds light on the expected advancements in AI capabilities but also underscores the urgency to address the ethical, safety, and governance challenges posed by these rapidly developing technologies.

References (48)
  1. 2022 Expert Survey on Progress in AI. AI Impacts, Aug 2022. URL https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2022_expert_survey_on_progress_in_ai.
  2. OpenAI. Moving AI governance forward, Jul 2023. URL https://openai.com/blog/moving-ai-governance-forward.
  3. Center for Human-compatible Artificial Intelligence. Research Publications – Center for Human-Compatible Artificial Intelligence, 2023. URL https://humancompatible.ai/research.
  4. Gavin Newsom. EXECUTIVE ORDER N-12-23, Sep 2023. URL https://www.gov.ca.gov/wp-content/uploads/2023/09/AI-EO-No.12-_-GGN-Signed.pdf.
  5. AI.gov. Making AI Work for the American People, 2023. URL https://ai.gov.
  6. Inter-Agency Working Group on Artificial Intelligence. Principles for the Ethical Use of Artificial Intelligence in the United Nations System (Advanced unedited version), September 2022.
  7. Views of prominent ai developers on risk from ai, 2023. URL https://wiki.aiimpacts.org/arguments_for_ai_risk/views_of_ai_developers_on_risk_from_ai.
  8. Pablo Villalobos. Scaling laws literature review, 2023. URL https://epochai.org/blog/scaling-laws-literature-review. Accessed: 2023-12-22.
  9. Discontinuous Progress Investigation. Technical report, AI Impacts, 2021. URL https://wiki.aiimpacts.org/ai_timelines/discontinuous_progress_investigation.
  10. Stephen M. Omohundro. The basic ai drives. In Proceedings of the 2008 Conference on Artificial General Intelligence 2008: Proceedings of the First AGI Conference, page 483–492, NLD, 2008. IOS Press. ISBN 9781586038335.
  11. Ai deception: A survey of examples, risks, and potential solutions. arXiv preprint arXiv:2308.14752, 2023.
  12. Charles I Jones. The ai dilemma: Growth versus existential risk. Technical report, National Bureau of Economic Research, 2023.
  13. Economic growth under transformative ai. Technical report, National Bureau of Economic Research, 2023.
  14. Future of Life Institute. Pause Giant AI Experiments: An Open Letter, 2023. URL https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
  15. Center for AI Safety. Statement on AI Risk: AI experts and public figures express their concerns about AI risk., 2023. URL https://www.safe.ai/statement-on-ai-risk.
  16. Joseph R Biden. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, 2023. URL https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
  17. Dario Amodei. Written testimony of dario amodei, ph.d. co-founder and ceo, anthropic; for a hearing on “oversight of a.i.: Principles for regulation”; before the judiciary committee subcommittee on privacy, technology, and the law; united states senate, Jul 2023. URL https://www.judiciary.senate.gov/imo/media/doc/2023-07-26_-_testimony_-_amodei.pdf.
  18. gov.uk. AI Safety Summit 2023 - GOV.UK, November 2023. URL https://www.gov.uk/government/topical-events/ai-safety-summit-2023.
  19. European Parliament. EU AI Act: first regulation on artificial intelligence. Accessed June, 25:2023, 2023. URL https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence.
  20. When will ai exceed human performance? evidence from ai experts. Journal of Artificial Intelligence Research, 62:729–754, 2018a.
  21. Zachary Stein-Perlman. Surveys of US public opinion on AI, 2023. URL https://wiki.aiimpacts.org/responses_to_ai/public_opinion_on_ai/surveys_of_public_opinion_on_ai/surveys_of_us_public_opinion_on_ai.
  22. The state of AI in 2023: Generative AI’s breakout year. Technical report, McKinsey & Company, 2023.
  23. AI Impacts. 2023 Expert Survey on Progress in AI [Survey PDF], 2023a. URL https://wiki.aiimpacts.org/_media/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_espai_paid.pdf.
  24. The framing of decisions and the psychology of choice. science, 211(4481):453–458, 1981.
  25. AI Impacts. 2023 Expert Survey on Progress in AI [AI Impacts Wiki], 2023b. URL https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/2023_expert_survey_on_progress_in_ai.
  26. O*NET. All Job Family Occupations, 2023. URL https://www.onetonline.org/find/family?f=0.
  27. Joseph Carlsmith. Is Power-Seeking AI an Existential Risk? arXiv preprint arXiv:2206.13353, 2022.
  28. Stuart Russell. Of myths and moonshine, 2014. URL https://www.edge.org/conversation/the-myth-of-ai#26015.
  29. Philip E. Tetlock. Expert Political Judgment: How Good Is It? How Can We Know? Princeton University Press, Princeton, 2005. ISBN 9781400888818. doi:10.1515/9781400888818. URL https://doi.org/10.1515/9781400888818.
  30. A strategy to improve expert technology forecasts. Proceedings of the National Academy of Sciences, 118(21):e2021558118, 2021.
  31. James Surowiecki. The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations. Doubleday & Co., 2004.
  32. Forecasting existential risks: Evidence from a long-run forecasting tournament, 2023.
  33. Michael Braun Hamilton. Online survey response rates and times: Background and guidance for industry. Tercent, Inc, 2003.
  34. National Research Council (US) Committee on Assessing Fundamental Attitudes of Life Scientists as a Basis for Biosecurity Education. A survey of attitudes and actions on dual use research in the life sciences: A collaborative effort of the national research council and the american association for the advancement of science. Technical report, American Association for the Advancement of Science and National Research Council and others, 2009.
  35. 2023 Expert Survey on Progress in AI, Oct 2023. URL https://osf.io/8gzdr.
  36. AI Impacts. AI Timeline Surveys, 2023c. URL https://wiki.aiimpacts.org/ai_timelines/predictions_of_human-level_ai_timelines/ai_timeline_surveys/ai_timeline_surveys.
  37. Artificial intelligence and economic growth. Technical report, National Bureau of Economic Research, 2017.
  38. Dennis Bray and Hans von Storch. Climate science: An empirical example of postnormal science. Bulletin of the American Meteorological Society, 80(3):439–456, 1999.
  39. CliSci2008: A survey of the perspectives of climate scientists concerning climate science and climate change. GKSS-Forschungszentrum Geesthacht Geesthacht, 2010.
  40. Examining the scientific consensus on climate change. Eos, Transactions American Geophysical Union, 90(3):22–23, 2009.
  41. Viewpoint: when will AI exceed human performance? Evidence from AI Experts. Journal of Artificial Intelligence Research, pages 729–754, 2018b. doi:10.1613/jair.1.11222.
  42. The causes and consequences of response rates in surveys by the news media and government contractor survey research firms. Advances in telephone survey methodology, pages 499–528, 2007.
  43. Thomas R Stewart. Scientists’ uncertainty and disagreement about global climate change: A psychological perspective. International Journal of Psychology, 26(5):565–573, 1991.
  44. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological review, 90(4):293, 1983.
  45. The future prospects of energy technologies: Insights from expert elicitations. Review of Environmental Economics and Policy, 12(1), 2016.
  46. Mail surveys for election forecasting? an evaluation of the columbus dispatch poll. Public Opinion Quarterly, 60(2):181–227, 1996.
  47. Expert elicitation survey predicts 37% to 49% declines in wind energy costs by 2050. Nature Energy, 6(5):555–565, 2021.
  48. Forecasting ai progress: Evidence from a survey of machine learning researchers. arXiv preprint arXiv:2206.04132, 2022.
Authors (6)
  1. Katja Grace
  2. Harlan Stewart
  3. Julia Fabienne Sandkühler
  4. Stephen Thomas
  5. Ben Weinstein-Raun
  6. Jan Brauner