CURIE: Evaluating LLMs On Multitask Scientific Long Context Understanding and Reasoning
March 14, 2025
作者: Hao Cui, Zahra Shamsi, Gowoon Cheon, Xuejian Ma, Shutong Li, Maria Tikhanovskaya, Peter Norgaard, Nayantara Mudur, Martyna Plomecka, Paul Raccuglia, Yasaman Bahri, Victor V. Albert, Pranesh Srinivasan, Haining Pan, Philippe Faist, Brian Rohr, Michael J. Statt, Dan Morris, Drew Purves, Elise Kleeman, Ruth Alcantara, Matthew Abraham, Muqthar Mohammad, Ean Phing VanLee, Chenfei Jiang, Elizabeth Dorfman, Eun-Ah Kim, Michael P Brenner, Viren Jain, Sameera Ponda, Subhashini Venugopalan
cs.AI
Scientific problem-solving involves synthesizing information while applying
expert knowledge. We introduce CURIE, a scientific long-Context
Understanding, Reasoning, and Information Extraction benchmark to measure the
potential of Large Language Models (LLMs) in scientific problem-solving and in
assisting scientists in realistic workflows. This benchmark introduces ten
challenging tasks with a total of 580 problem and solution pairs curated by
experts in six disciplines - materials science, condensed matter physics,
quantum computing, geospatial analysis, biodiversity, and proteins - covering
both experimental and theoretical workflows in science. We evaluate a range of
closed and open LLMs on tasks in CURIE, which require domain expertise,
comprehension of long in-context information, and multi-step reasoning. While
Gemini Flash 2.0 and Claude-3 show consistently high comprehension across
domains, the popular GPT-4o and command-R+ fail dramatically on protein
sequencing tasks. With the best performance at 32%, there is much room for
improvement for all models. We hope that insights gained from CURIE can guide
the future development of LLMs in the sciences. Evaluation code and data are at
https://github.com/google/curie.