FutureX: An Advanced Live Benchmark for LLM Agents in Future Prediction
August 16, 2025
Authors: Zhiyuan Zeng, Jiashuo Liu, Siyuan Chen, Tianci He, Yali Liao, Jinpeng Wang, Zaiyuan Wang, Yang Yang, Lingyue Yin, Mingren Yin, Zhenwei Zhu, Tianle Cai, Zehui Chen, Jiecao Chen, Yantao Du, Xiang Gao, Jiacheng Guo, Liang Hu, Jianpeng Jiao, Xiangsheng Li, Jingkai Liu, Shuang Ni, Zhoufutu Wen, Ge Zhang, Kaiyuan Zhang, Xin Zhou, Jose Blanchet, Xipeng Qiu, Mengdi Wang, Wenhao Huang
cs.AI
Abstract
Future prediction is a complex task for LLM agents, requiring a high level of
analytical thinking, information gathering, contextual understanding, and
decision-making under uncertainty. Agents must not only gather and interpret
vast amounts of dynamic information but also integrate diverse data sources,
weigh uncertainties, and adapt predictions based on emerging trends, just as
human experts do in fields like politics, economics, and finance. Despite its
importance, no large-scale benchmark exists for evaluating agents on future
prediction, largely due to challenges in handling real-time updates and
retrieving timely, accurate answers. To address this, we introduce
FutureX, a dynamic and live evaluation benchmark specifically
designed for LLM agents performing future prediction tasks. FutureX is the
largest and most diverse live benchmark for future prediction, supporting
real-time daily updates and eliminating data contamination through an automated
pipeline for question gathering and answer collection. We evaluate 25 LLM/agent
models, including those with reasoning, search capabilities, and integration of
external tools such as the open-source Deep Research Agent and closed-source
Deep Research models. This comprehensive evaluation assesses agents' adaptive
reasoning and performance in dynamic environments. Additionally, we provide
in-depth analyses of agents' failure modes and performance pitfalls in
future-oriented tasks, including their vulnerability to fake web pages and
issues of temporal validity. Our goal is to establish a dynamic, contamination-free
evaluation standard that drives the development of LLM agents capable of
performing at the level of professional human analysts in complex reasoning and
predictive thinking.