
Evaluating LLMs on Real-World Forecasting Against Human Superforecasters

July 6, 2025
Authors: Janna Lu
cs.AI

Abstract

Large language models (LLMs) have demonstrated remarkable capabilities across diverse tasks, but their ability to forecast future events remains understudied. A year ago, large language models struggled to come close to the accuracy of a human crowd. I evaluate state-of-the-art LLMs on 464 forecasting questions from Metaculus, comparing their performance against human superforecasters. Frontier models achieve Brier scores that ostensibly surpass the human crowd but still significantly underperform a group of superforecasters.
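The comparison above rests on the Brier score, the standard accuracy metric for probabilistic forecasts: the mean squared difference between the forecast probability and the binary outcome, where lower is better. A minimal sketch (the function name and example values are illustrative, not taken from the paper):

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and binary outcomes.

    Lower is better: 0.0 is perfect, and a constant 0.5 forecast on every
    question scores 0.25 regardless of outcomes.
    """
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Three hypothetical binary forecasts: probabilities vs. resolved outcomes.
score = brier_score([0.9, 0.2, 0.7], [1, 0, 1])
print(score)  # (0.01 + 0.04 + 0.09) / 3 ≈ 0.0467
```

A forecaster is rewarded for being both confident and correct; overconfident wrong answers are penalized quadratically, which is why small Brier-score gaps between model and superforecaster groups can reflect meaningful calibration differences.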
