SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models
July 20, 2023
Authors: Xiaoxuan Wang, Ziniu Hu, Pan Lu, Yanqiao Zhu, Jieyu Zhang, Satyen Subramaniam, Arjun R. Loomba, Shichang Zhang, Yizhou Sun, Wei Wang
cs.AI
Abstract
Recent advances in large language models (LLMs) have demonstrated notable
progress on many mathematical benchmarks. However, most of these benchmarks
only feature problems grounded in junior and senior high school subjects,
contain only multiple-choice questions, and are confined to a limited scope of
elementary arithmetic operations. To address these issues, this paper
introduces an expansive benchmark suite SciBench that aims to systematically
examine the reasoning capabilities required for complex scientific problem
solving. SciBench contains two carefully curated datasets: an open set
featuring a range of collegiate-level scientific problems drawn from
mathematics, chemistry, and physics textbooks, and a closed set comprising
problems from undergraduate-level exams in computer science and mathematics.
Based on the two datasets, we conduct an in-depth benchmark study of two
representative LLMs with various prompting strategies. The results reveal that
current LLMs fall short of delivering satisfactory performance, with an overall
score of merely 35.80%. Furthermore, through a detailed user study, we
categorize the errors made by LLMs into ten problem-solving abilities. Our
analysis indicates that no single prompting strategy significantly outperforms
the others, and that some strategies which improve certain problem-solving
skills cause declines in others. We envision that
SciBench will catalyze further developments in the reasoning abilities of LLMs,
thereby ultimately contributing to scientific research and discovery.
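Scoring a free-response scientific benchmark typically means comparing a model's numeric answer to a reference solution within some tolerance, then reporting the fraction correct as a percentage (as in the 35.80% overall score above). The sketch below illustrates this style of evaluation only; the `is_correct` helper, the 5% relative tolerance, and the sample data are assumptions for illustration, not SciBench's actual scoring code.

```python
def is_correct(predicted: float, reference: float, rel_tol: float = 0.05) -> bool:
    """Mark a numeric answer correct if it lies within a relative tolerance
    of the reference solution (the tolerance value here is an assumption)."""
    if reference == 0:
        return abs(predicted) <= rel_tol
    return abs(predicted - reference) / abs(reference) <= rel_tol

def overall_score(results) -> float:
    """Percentage of (predicted, reference) pairs judged correct."""
    correct = sum(is_correct(p, r) for p, r in results)
    return 100.0 * correct / len(results)

# Hypothetical (predicted, reference) answer pairs, for illustration only.
results = [(9.81, 9.8), (42.0, 40.0), (1.0, 2.0)]
print(f"{overall_score(results):.2f}%")  # → 66.67%
```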