

ScienceAgentBench: Toward Rigorous Assessment of Language Agents for Data-Driven Scientific Discovery

October 7, 2024
Authors: Ziru Chen, Shijie Chen, Yuting Ning, Qianheng Zhang, Boshi Wang, Botao Yu, Yifei Li, Zeyi Liao, Chen Wei, Zitong Lu, Vishal Dey, Mingyi Xue, Frazier N. Baker, Benjamin Burns, Daniel Adu-Ampratwum, Xuhui Huang, Xia Ning, Song Gao, Yu Su, Huan Sun
cs.AI

Abstract

The advancements of large language models (LLMs) have piqued growing interest in developing LLM-based language agents to automate scientific discovery end-to-end, which has sparked both excitement and skepticism about the true capabilities of such agents. In this work, we argue that for an agent to fully automate scientific discovery, it must be able to complete all essential tasks in the workflow. Thus, we call for rigorous assessment of agents on individual tasks in a scientific workflow before making bold claims on end-to-end automation. To this end, we present ScienceAgentBench, a new benchmark for evaluating language agents for data-driven scientific discovery. To ensure the scientific authenticity and real-world relevance of our benchmark, we extract 102 tasks from 44 peer-reviewed publications in four disciplines and engage nine subject matter experts to validate them. We unify the target output for every task to a self-contained Python program file and employ an array of evaluation metrics to examine the generated programs, execution results, and costs. Each task goes through multiple rounds of manual validation by annotators and subject matter experts to ensure its annotation quality and scientific plausibility. We also propose two effective strategies to mitigate data contamination concerns. Using our benchmark, we evaluate five open-weight and proprietary LLMs, each with three frameworks: direct prompting, OpenHands, and self-debug. Given three attempts for each task, the best-performing agent can only solve 32.4% of the tasks independently and 34.3% with expert-provided knowledge. These results underscore the limited capacities of current language agents in generating code for data-driven discovery, let alone end-to-end automation for scientific research.
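
To make the evaluation protocol described in the abstract concrete, below is a minimal sketch of a per-task scoring loop: an agent is asked for a self-contained Python program, the program is executed, and the task counts as solved if any of up to three attempts passes an expert-validated check. This is an illustrative sketch only; every name here (Task, run_agent, check_output, evaluate_task, success_rate) is a hypothetical placeholder, not the benchmark's released code or API.

```python
"""Sketch of a ScienceAgentBench-style per-task evaluation loop.
All names are hypothetical illustrations, not the benchmark's actual code."""

import subprocess
import sys
import tempfile
from dataclasses import dataclass
from typing import Callable


@dataclass
class Task:
    task_id: str
    instruction: str                     # natural-language task description
    dataset_dir: str                     # input data from the source publication
    check_output: Callable[[str], bool]  # expert-validated success check


def run_agent(task: Task, framework: str) -> str:
    """Return a self-contained Python program produced by an LLM agent
    (e.g. direct prompting, OpenHands, or self-debug).
    Placeholder: wire in an actual agent framework here."""
    raise NotImplementedError


def evaluate_task(task: Task, framework: str, max_attempts: int = 3) -> bool:
    """Give the agent up to `max_attempts` tries; the task counts as solved
    if a generated program runs to completion and passes the checker."""
    for _ in range(max_attempts):
        program = run_agent(task, framework)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(program)
            program_path = f.name
        try:
            result = subprocess.run(
                [sys.executable, program_path],
                capture_output=True, text=True, timeout=600,
            )
        except subprocess.TimeoutExpired:
            continue  # treat a timed-out attempt as a failure and retry
        if result.returncode == 0 and task.check_output(result.stdout):
            return True
    return False


def success_rate(tasks: list[Task], framework: str) -> float:
    """Fraction of tasks solved independently by a given agent framework."""
    solved = sum(evaluate_task(t, framework) for t in tasks)
    return solved / len(tasks)
```

In this sketch, a framework's headline number is simply the fraction of tasks for which at least one of three generated programs executes successfully and passes its check; the paper additionally reports metrics over the generated programs themselves and their execution costs.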

