RM-R1: Reward Modeling as Reasoning
May 5, 2025
Authors: Xiusi Chen, Gaotang Li, Ziqi Wang, Bowen Jin, Cheng Qian, Yu Wang, Hongru Wang, Yu Zhang, Denghui Zhang, Tong Zhang, Hanghang Tong, Heng Ji
cs.AI
Abstract
Reward modeling is essential for aligning large language models (LLMs) with
human preferences, especially through reinforcement learning from human
feedback (RLHF). To provide accurate reward signals, a reward model (RM) should
stimulate deep thinking and conduct interpretable reasoning before assigning a
score or a judgment. However, existing RMs either produce opaque scalar scores
or directly predict a preferred answer, which makes it difficult for them to
integrate natural-language critiques and limits their interpretability.
Inspired by recent advances in long chain-of-thought (CoT) on
reasoning-intensive tasks, we hypothesize and validate that integrating
reasoning capabilities into reward modeling significantly enhances RM's
interpretability and performance. In this work, we introduce a new class of
generative reward models -- Reasoning Reward Models (ReasRMs) -- which
formulate reward modeling as a reasoning task. We propose a reasoning-oriented
training pipeline and train a family of ReasRMs, RM-R1. The training consists
of two key stages: (1) distillation of high-quality reasoning chains and (2)
reinforcement learning with verifiable rewards. RM-R1 improves LLM rollouts by
self-generating reasoning traces or chat-specific rubrics and evaluating
candidate responses against them. Empirically, our models achieve
state-of-the-art or near state-of-the-art performance among generative RMs across
multiple comprehensive reward model benchmarks, outperforming much larger
open-weight models (e.g., Llama3.1-405B) and proprietary ones (e.g., GPT-4o) by
up to 13.8%. Beyond final performance, we perform thorough empirical analysis
to understand the key ingredients of successful ReasRM training. To facilitate
future research, we release six ReasRM models along with code and data at
https://github.com/RM-R1-UIUC/RM-R1.
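The abstract describes a two-stage recipe (distillation of reasoning chains, then reinforcement learning with verifiable rewards) in which the reward model first writes a rubric or reasoning trace and only then states a preference. The following is a minimal Python sketch of that judging loop and of a verifiable pairwise reward, not the authors' released implementation; the prompt template, the generate callable, and the [[A]]/[[B]] verdict format are illustrative assumptions.

# Minimal sketch (not the released RM-R1 code): the reward model is prompted to
# write a rubric / reasoning trace before stating a preference, and during RL
# the stated preference is checked against a ground-truth label to obtain a
# verifiable reward. The prompt template, the `generate` callable, and the
# [[A]]/[[B]] verdict format are illustrative assumptions.

from typing import Callable

JUDGE_TEMPLATE = """You are a reward model acting as a judge.
First write an evaluation rubric and a short reasoning trace comparing the two
candidate answers, then give your verdict on the last line as [[A]] or [[B]].

Question: {question}

Answer A: {answer_a}

Answer B: {answer_b}
"""


def judge_pair(generate: Callable[[str], str],
               question: str, answer_a: str, answer_b: str) -> str:
    """Return "A" or "B", parsed from the judge model's final verdict line."""
    prompt = JUDGE_TEMPLATE.format(
        question=question, answer_a=answer_a, answer_b=answer_b)
    completion = generate(prompt).strip()
    last_line = completion.splitlines()[-1] if completion else ""
    return "A" if "[[A]]" in last_line else "B"


def verifiable_reward(predicted: str, gold: str) -> float:
    """Binary, automatically checkable reward for RL training:
    1.0 if the judged preference matches the human preference label, else 0.0."""
    return 1.0 if predicted == gold else 0.0

Under these assumptions, the binary signal from verifiable_reward would be fed to a policy-optimization algorithm during training, while at inference time only judge_pair is needed to score candidate responses.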