Pitfalls of Rule- and Model-based Verifiers -- A Case Study on Mathematical Reasoning
May 28, 2025
Authors: Yuzhen Huang, Weihao Zeng, Xingshan Zeng, Qi Zhu, Junxian He
cs.AI
Abstract
Trustworthy verifiers are essential for the success of reinforcement learning with verifiable reward (RLVR), which is the core methodology behind various large reasoning models such as DeepSeek-R1. In complex domains like mathematical reasoning, rule-based verifiers have been widely adopted in previous works to train strong reasoning models. However, the reliability of these verifiers and their impact on the RL training process remain poorly understood. In this work, we take mathematical reasoning as a case study and conduct a comprehensive analysis of various verifiers in both static evaluation and RL training scenarios. First, we find that current open-source rule-based verifiers often fail to recognize equivalent answers presented in different formats across multiple commonly used mathematical datasets, resulting in non-negligible false negative rates. This limitation adversely affects RL training performance and becomes more pronounced as the policy model gets stronger. Subsequently, we investigate model-based verifiers as a potential solution to address these limitations. While the static evaluation shows that model-based verifiers achieve significantly higher verification accuracy, further analysis and RL training results imply that they are highly susceptible to hacking, where they misclassify certain patterns in responses as correct (i.e., false positives). This vulnerability is exploited during policy model optimization, leading to artificially inflated rewards. Our findings underscore the unique risks inherent to both rule-based and model-based verifiers, aiming to offer valuable insights to develop more robust reward systems in reinforcement learning.
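To make the false-negative failure mode concrete, the minimal sketch below contrasts a naive exact-match rule with a symbolic-equivalence check built on sympy. This is an illustration only, not the verifier studied in the paper: the function names `naive_rule_verifier` and `symbolic_verifier` are hypothetical, and sympy is one possible way to test equivalence of answers written in different formats (e.g., "1/2" versus "0.5").

```python
# Minimal sketch (assumption: not the paper's verifier) showing how a rigid
# rule-based check can reject a correct answer written in a different format,
# while a symbolic-equivalence check accepts it.
import sympy


def naive_rule_verifier(predicted: str, reference: str) -> bool:
    # Exact string match after stripping whitespace -- the kind of rigid rule
    # that rejects "1/2" when the reference answer is written as "0.5".
    return predicted.strip() == reference.strip()


def symbolic_verifier(predicted: str, reference: str) -> bool:
    # Parse both answers as symbolic expressions and test whether their
    # difference simplifies to zero, so formatting differences do not matter.
    try:
        diff = sympy.simplify(sympy.sympify(predicted) - sympy.sympify(reference))
        return diff == 0
    except (sympy.SympifyError, TypeError):
        return False


if __name__ == "__main__":
    print(naive_rule_verifier("1/2", "0.5"))  # False: a false negative on a correct answer
    print(symbolic_verifier("1/2", "0.5"))    # True: mathematical equivalence is recognized
```

Even a more permissive equivalence check is only a partial remedy: as the abstract notes, relaxing verification further by using model-based verifiers introduces the opposite risk of false positives that policy optimization can exploit for inflated rewards.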