Evaluating LLMs on Real-World Forecasting Against Human Superforecasters (2507.04562v1)
Published 6 Jul 2025 in cs.LG, cs.AI, and cs.CL
Abstract: LLMs have demonstrated remarkable capabilities across diverse tasks, but their ability to forecast future events remains understudied. A year ago, LLMs struggled to come close to the accuracy of a human crowd. I evaluate state-of-the-art LLMs on 464 forecasting questions from Metaculus, comparing their performance against human superforecasters. Frontier models achieve Brier scores that ostensibly surpass the human crowd but still significantly underperform a group of superforecasters.
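The Brier score used for comparison is the mean squared difference between a forecast probability and the realized binary outcome; lower is better. A minimal sketch (the probabilities and outcomes below are hypothetical, not from the paper):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities (0..1)
    and binary outcomes (0 = did not happen, 1 = happened)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Three hypothetical questions: forecasts 0.9, 0.2, 0.6 with outcomes 1, 0, 1
print(round(brier_score([0.9, 0.2, 0.6], [1, 0, 1]), 4))  # 0.07
```

A perfectly confident, always-correct forecaster scores 0.0; always predicting 0.5 yields 0.25, so differences of a few hundredths separate strong forecasters.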