
ARB: Advanced Reasoning Benchmark for Large Language Models

July 25, 2023
Authors: Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.AI

Abstract

Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
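The abstract describes a rubric-based evaluation approach in which GPT-4 grades intermediate reasoning steps against problem-specific criteria. The snippet below is a minimal sketch of that idea, not the authors' code: the rubric format, prompt wording, and the `rubric_grade` helper are illustrative assumptions, and it uses the standard OpenAI Python client purely as an example grader backend.

```python
# Minimal sketch of rubric-based step grading (illustrative, not the ARB pipeline):
# the grader model sees the problem, a point-valued rubric, and a candidate
# solution, then awards points per criterion.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GRADER_PROMPT = """You are grading a solution against a rubric.
Problem:
{problem}

Rubric (award each criterion's points only if fully satisfied):
{rubric}

Candidate solution:
{solution}

For each rubric criterion, output a line "criterion_id: points_awarded",
then a final line "total: <sum>"."""


def rubric_grade(problem: str, rubric: str, solution: str, model: str = "gpt-4") -> str:
    """Ask the grader model to score a candidate solution against the rubric."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": GRADER_PROMPT.format(problem=problem, rubric=rubric, solution=solution),
        }],
        temperature=0,  # deterministic grading
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    rubric = (
        "1. (2 pts) Sets up the antiderivative correctly.\n"
        "2. (3 pts) Evaluates the definite integral and states the final answer."
    )
    print(rubric_grade(
        "Compute the integral of x^2 from 0 to 1.",
        rubric,
        "The antiderivative is x^3/3, so the value is 1/3.",
    ))
```

In the paper's setup the same kind of per-criterion scores are also collected from human annotators, which is what allows the reported comparison between annotator and GPT-4 rubric scores.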