VerifyBench: Benchmarking Reference-based Reward Systems for Large Language Models
May 21, 2025
Authors: Yuchen Yan, Jin Jiang, Zhenbang Ren, Yijun Li, Xudong Cai, Yang Liu, Xin Xu, Mengdi Zhang, Jian Shao, Yongliang Shen, Jun Xiao, Yueting Zhuang
cs.AI
Abstract
Large reasoning models such as OpenAI o1 and DeepSeek-R1 have achieved
remarkable performance in the domain of reasoning. A key component of their
training is the incorporation of verifiable rewards within reinforcement
learning (RL). However, existing reward benchmarks do not evaluate
reference-based reward systems, leaving researchers with limited understanding
of the accuracy of verifiers used in RL. In this paper, we introduce two
benchmarks, VerifyBench and VerifyBench-Hard, designed to assess the
performance of reference-based reward systems. These benchmarks are constructed
through meticulous data collection and curation, followed by careful human
annotation to ensure high quality. Current models, especially smaller-scale
ones, still show considerable room for improvement on both VerifyBench and
VerifyBench-Hard. Furthermore, we conduct a thorough and comprehensive
analysis of evaluation results, offering insights for understanding and
developing reference-based reward systems. Our proposed benchmarks serve as
effective tools for guiding the improvement of verifier accuracy and, in turn,
the reasoning capabilities of models trained via RL on reasoning tasks.
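For illustration, the following is a minimal sketch of what a reference-based reward (verifier) might look like inside an RL training loop, assuming a simple exact-match rule; the function names, the "Answer:" extraction convention, and the normalization are hypothetical and not taken from the paper. Real verifiers of the kind VerifyBench evaluates may instead use rule-based mathematical checking or an LLM judge.

    # Hypothetical sketch of a reference-based reward for RL training.
    # It compares a model-generated completion against a reference answer
    # and returns a binary reward; this is only an exact-match stand-in
    # for the more sophisticated verifiers studied in the paper.
    import re

    def extract_final_answer(completion: str) -> str:
        """Take the text after the last 'Answer:' marker as the model's answer."""
        parts = re.split(r"Answer:", completion)
        return parts[-1].strip()

    def normalize(answer: str) -> str:
        """Lowercase and strip whitespace and common punctuation for a lenient comparison."""
        return re.sub(r"[\s\.\$,]", "", answer.lower())

    def reference_based_reward(completion: str, reference: str) -> float:
        """Return 1.0 if the extracted answer matches the reference, else 0.0."""
        predicted = extract_final_answer(completion)
        return 1.0 if normalize(predicted) == normalize(reference) else 0.0

    # Example: the verifier's verdict becomes the scalar reward for the RL update.
    print(reference_based_reward("The result is 12, so Answer: 12", "12"))  # 1.0

The benchmarks probe exactly the step this sketch trivializes: deciding whether a free-form completion agrees with the reference answer, which is where verifier errors propagate into the RL reward signal.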