
AutoResearchBench: Benchmarking AI Agents on Complex Scientific Literature Discovery

April 28, 2026
Authors: Lei Xiong, Kun Luo, Ziyi Xia, Wenbo Zhang, Jin-Ge Yao, Zheng Liu, Jingying Shao, Jianlyu Chen, Hongjin Qian, Xi Yang, Qian Yu, Hao Li, Chen Yue, Xiaan Du, Yuyang Wang, Yesheng Liu, Haiyu Xu, Zhicheng Dou
cs.AI

Abstract

Autonomous scientific research has advanced significantly thanks to the development of AI agents. One key step in this process is finding the right scientific literature, whether to explore existing knowledge for a research problem or to acquire evidence for verifying assumptions and supporting claims. To assess AI agents' capability in driving this process, we present AutoResearchBench, a dedicated benchmark for autonomous scientific literature discovery. AutoResearchBench consists of two complementary task types: (1) Deep Research, which requires tracking down a specific target paper through a progressive, multi-step probing process, and (2) Wide Research, which requires comprehensively collecting a set of papers satisfying given conditions. Compared to previous benchmarks on agentic web browsing, AutoResearchBench is distinguished along three dimensions: it is research-oriented, calling for in-depth comprehension of scientific concepts; literature-focused, demanding fine-grained utilization of detailed information; and open-ended, involving an unknown number of qualified papers and thus requiring deliberate reasoning and search throughout. These properties make AutoResearchBench uniquely suited for evaluating autonomous research capabilities, and extraordinarily challenging. Even the most powerful LLMs, despite having largely conquered general agentic web-browsing benchmarks such as BrowseComp, achieve only 9.39% accuracy on Deep Research and 9.31% IoU on Wide Research, while many other strong baselines fall below 5%. We publicly release the dataset, evaluation pipeline, and code at https://github.com/CherYou/AutoResearchBench to facilitate future research in this direction.
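Wide Research is scored by intersection-over-union between the agent's retrieved paper set and the gold set. A minimal sketch of that metric, assuming papers are compared by identifier (the function name and the arXiv-style IDs below are illustrative, not taken from the benchmark's actual pipeline):

```python
def paper_set_iou(predicted, gold):
    """IoU between two collections of paper identifiers.

    Duplicates are ignored (set semantics); two empty sets score 0.0.
    """
    pred, ref = set(predicted), set(gold)
    union = pred | ref
    if not union:
        return 0.0
    return len(pred & ref) / len(union)

# 2 papers shared out of 4 distinct papers overall -> IoU = 0.5
score = paper_set_iou(
    ["2301.00001", "2302.00002", "2303.00003"],
    ["2301.00001", "2302.00002", "2304.00004"],
)
print(score)  # 0.5
```

Unlike plain accuracy, this metric penalizes both missing qualified papers (shrinking the intersection) and over-collecting spurious ones (growing the union), which matches the open-ended nature of the task.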