ARB: Advanced Reasoning Benchmark for Large Language Models
July 25, 2023
Authors: Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.AI
Abstract
Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores.