

FutureX: An Advanced Live Benchmark for LLM Agents in Future Prediction

August 16, 2025
Authors: Zhiyuan Zeng, Jiashuo Liu, Siyuan Chen, Tianci He, Yali Liao, Jinpeng Wang, Zaiyuan Wang, Yang Yang, Lingyue Yin, Mingren Yin, Zhenwei Zhu, Tianle Cai, Zehui Chen, Jiecao Chen, Yantao Du, Xiang Gao, Jiacheng Guo, Liang Hu, Jianpeng Jiao, Xiangsheng Li, Jingkai Liu, Shuang Ni, Zhoufutu Wen, Ge Zhang, Kaiyuan Zhang, Xin Zhou, Jose Blanchet, Xipeng Qiu, Mengdi Wang, Wenhao Huang
cs.AI

Abstract

Future prediction is a complex task for LLM agents, requiring a high level of analytical thinking, information gathering, contextual understanding, and decision-making under uncertainty. Agents must not only gather and interpret vast amounts of dynamic information but also integrate diverse data sources, weigh uncertainties, and adapt predictions to emerging trends, just as human experts do in fields such as politics, economics, and finance. Despite its importance, no large-scale benchmark exists for evaluating agents on future prediction, largely because of the challenges of handling real-time updates and retrieving timely, accurate answers. To address this, we introduce FutureX, a dynamic, live evaluation benchmark designed specifically for LLM agents performing future-prediction tasks. FutureX is the largest and most diverse live benchmark for future prediction; it supports real-time daily updates and eliminates data contamination through an automated pipeline for question gathering and answer collection. We evaluate 25 LLM/agent models, including those with reasoning and search capabilities and those that integrate external tools, such as the open-source Deep Research Agent and closed-source Deep Research models. This comprehensive evaluation assesses agents' adaptive reasoning and performance in dynamic environments. We also provide in-depth analyses of agents' failure modes and performance pitfalls on future-oriented tasks, including their vulnerability to fake web pages and to temporal-validity errors. Our goal is to establish a dynamic, contamination-free evaluation standard that drives the development of LLM agents capable of performing at the level of professional human analysts in complex reasoning and predictive thinking.
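The abstract's central mechanism, posing questions before their answers exist and grading them only after the real-world event resolves, can be made concrete with a short sketch. The Python below is a hypothetical illustration, not the FutureX pipeline: every name (`Question`, `LiveBenchmark`, `gather`, `collect_predictions`, `resolve_and_score`) is invented here, and scoring is reduced to exact string match for brevity.

```python
# A minimal, hypothetical sketch of a contamination-free live benchmark
# loop. None of these names come from the paper; scoring is reduced to
# exact string match for brevity.
import datetime as dt
from dataclasses import dataclass, field


@dataclass
class Question:
    qid: str
    text: str
    resolves_at: dt.datetime      # when the real-world outcome becomes known
    prediction: str | None = None
    answer: str | None = None


@dataclass
class LiveBenchmark:
    pending: dict[str, Question] = field(default_factory=dict)
    results: list[tuple[str, bool]] = field(default_factory=list)

    def gather(self, questions: list[Question]) -> None:
        """Step 1 (daily): admit only questions whose answers do not exist
        yet, so no pretrained model could have memorized them."""
        now = dt.datetime.now(dt.timezone.utc)
        for q in questions:
            if q.resolves_at > now:
                self.pending[q.qid] = q

    def collect_predictions(self, agent) -> None:
        """Step 2 (daily): query the agent while outcomes are still unknown."""
        for q in self.pending.values():
            if q.prediction is None:
                q.prediction = agent(q.text)

    def resolve_and_score(self, ground_truth) -> None:
        """Step 3 (daily): grade questions whose events have now resolved."""
        now = dt.datetime.now(dt.timezone.utc)
        due = [qid for qid, q in self.pending.items() if q.resolves_at <= now]
        for qid in due:
            q = self.pending.pop(qid)
            q.answer = ground_truth(qid)
            self.results.append((qid, q.prediction == q.answer))
```

Because a question enters the pool strictly before its outcome is knowable, the answer cannot appear in any model's training data, which is what makes a live benchmark of this kind contamination-free by construction.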