AutoResearchBench: Benchmarking AI Agents on Complex Scientific Literature Discovery
April 28, 2026
Authors: Lei Xiong, Kun Luo, Ziyi Xia, Wenbo Zhang, Jin-Ge Yao, Zheng Liu, Jingying Shao, Jianlyu Chen, Hongjin Qian, Xi Yang, Qian Yu, Hao Li, Chen Yue, Xiaan Du, Yuyang Wang, Yesheng Liu, Haiyu Xu, Zhicheng Dou
cs.AI
Abstract
Autonomous scientific research has advanced significantly thanks to the development of AI agents. One key step in this process is finding the right scientific literature, whether to explore existing knowledge for a research problem, or to acquire evidence for verifying assumptions and supporting claims. To assess AI agents' capability in driving this process, we present AutoResearchBench, a dedicated benchmark for autonomous scientific literature discovery. AutoResearchBench consists of two complementary task types: (1) Deep Research, which requires tracking down a specific target paper through a progressive, multi-step probing process, and (2) Wide Research, which requires comprehensively collecting a set of papers satisfying given conditions. Compared to previous benchmarks on agentic web browsing, AutoResearchBench is distinguished along three dimensions: it is research-oriented, calling for in-depth comprehension of scientific concepts; literature-focused, demanding fine-grained utilization of detailed information; and open-ended, involving an unknown number of qualified papers and thus requiring deliberate reasoning and search throughout. These properties make AutoResearchBench uniquely suited for evaluating autonomous research capabilities, and extraordinarily challenging. Even the most powerful LLMs, despite having largely conquered general agentic web-browsing benchmarks such as BrowseComp, achieve only 9.39% accuracy on Deep Research and 9.31% IoU on Wide Research, while many other strong baselines fall below 5%. We publicly release the dataset, evaluation pipeline, and code at https://github.com/CherYou/AutoResearchBench to facilitate future research in this direction.
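For concreteness, the IoU score reported for the Wide Research task can be read as a set-level intersection-over-union between the papers an agent retrieves and the gold set of qualifying papers. The sketch below is an assumption about the metric's form (the paper itself defines the exact protocol); the paper identifiers are hypothetical.

```python
def wide_research_iou(retrieved: set[str], gold: set[str]) -> float:
    """Set-level IoU: |retrieved ∩ gold| / |retrieved ∪ gold|."""
    if not retrieved and not gold:
        return 1.0  # both empty: trivially identical
    return len(retrieved & gold) / len(retrieved | gold)

# Example: the agent returns 3 papers, 2 of which are correct,
# and misses 1 gold paper -> intersection 2, union 4.
retrieved = {"2401.00001", "2401.00002", "2401.99999"}
gold = {"2401.00001", "2401.00002", "2401.00003"}
print(wide_research_iou(retrieved, gold))  # → 0.5
```

Unlike accuracy on a single target paper (Deep Research), this score penalizes both missing qualifying papers and including spurious ones, which is why open-ended collection tasks tend to yield low numbers.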