Introduction
The trajectory of AI is a subject of global significance, shaping decision-making in the public sector, private industry, and academia. Yet the future of AI is hotly debated, and there is little consensus among experts. Against this backdrop, a large-scale survey was conducted to gather AI researchers' predictions about AI progress and its potential social consequences. The survey encompassed 2,778 AI researchers from leading conferences and is part of a series of inquiries into experts' expectations about AI development.
Survey Scope and Methodology
The 2023 Expert Survey on Progress in AI (ESPAI) drew on an expanded set of six top AI conferences, a significant increase in the pool of contributors compared to the previous year's survey. The questionnaire solicited responses via multiple-choice questions, probability estimates, and future-year projections, aiming to probe the nature of future AI systems and the risks they may pose. To manage framing effects, variants of questions with subtle wording differences were distributed randomly among participants.
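The random distribution of question variants described above can be sketched as follows. This is an illustrative toy, not the survey's actual instrument: the framing strings and the assignment function are hypothetical, standing in for the idea that each respondent sees one randomly chosen wording of the same underlying question.

```python
import random

# Two subtly different framings of the same underlying question
# (hypothetical wording, for illustration only).
FRAMINGS = [
    "By what year is there a 50% chance that X is achieved?",  # fixed-probability framing
    "What is the probability that X is achieved by year Y?",   # fixed-year framing
]

def assign_framings(respondent_ids, seed=0):
    """Randomly assign one framing to each respondent, reproducibly."""
    rng = random.Random(seed)
    return {rid: rng.choice(FRAMINGS) for rid in respondent_ids}

assignments = assign_framings(range(10))
```

Randomizing framings across respondents lets analysts compare answers between the two groups and estimate how much the wording itself shifts the forecasts.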
Results on AI Progress
According to the aggregated forecasts, there is a 50% chance that by 2028 AI systems could autonomously build a payment processing site, compose songs indistinguishable from those of popular musicians, and independently download and fine-tune a large language model (LLM). The researchers anticipate that AI could outperform humans in every task by as early as 2047, a prediction that has moved 13 years closer than in the prior year's survey. These predictions reflect both increasing optimism about the potential of AI and an advancing timeline for significant milestones.
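The "50% chance by 2028" figures above come from aggregating many individual probability forecasts. A minimal sketch of one plausible aggregation scheme (not necessarily the survey's actual method, and with made-up numbers): average the respondents' per-year probability curves, then read off the first year at which the mean curve reaches 50%.

```python
def aggregate_forecasts(curves, years):
    """curves: list of dicts mapping year -> P(milestone achieved by that year).
    Returns the mean curve and the first year it reaches 50%."""
    mean_curve = {y: sum(c[y] for c in curves) / len(curves) for y in years}
    crossing = next((y for y in years if mean_curve[y] >= 0.5), None)
    return mean_curve, crossing

# Three hypothetical respondents' cumulative forecasts for one milestone.
years = [2026, 2028, 2030, 2035]
curves = [
    {2026: 0.2, 2028: 0.5, 2030: 0.7, 2035: 0.90},
    {2026: 0.1, 2028: 0.4, 2030: 0.6, 2035: 0.80},
    {2026: 0.3, 2028: 0.6, 2030: 0.8, 2035: 0.95},
]
mean_curve, year_50 = aggregate_forecasts(curves, years)
print(year_50)  # first year the mean forecast reaches 50%
```

Real elicitation work typically fits a distribution to each respondent's answers before pooling, but the simple mean-then-threshold scheme above conveys how a headline "50% by year Y" summary can arise from many individual curves.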
Social Impacts and Concerns
On the social consequences of AI, the surveyed researchers expressed a mix of optimism and caution. While the majority indicated a likelihood of positive outcomes, a notable share also acknowledged a significant risk of extremely negative scenarios, including the possibility of human extinction. More than half of the respondents recommended "substantial" or "extreme" concern for six AI-related risks, such as the spread of misinformation and authoritarian control. Respondents also diverged on the preferred pace of AI development, while emphasizing a need to give higher priority to research into reducing potential AI risks.
This survey represents one of the most comprehensive inquiries into the expectations of AI researchers. It not only sheds light on anticipated advances in AI capabilities but also underscores the urgency of addressing the ethical, safety, and governance challenges posed by these rapidly developing technologies.