RM-R1: Reward Modeling as Reasoning
May 5, 2025
Authors: Xiusi Chen, Gaotang Li, Ziqi Wang, Bowen Jin, Cheng Qian, Yu Wang, Hongru Wang, Yu Zhang, Denghui Zhang, Tong Zhang, Hanghang Tong, Heng Ji
cs.AI
Abstract
Reward modeling is essential for aligning large language models (LLMs) with
human preferences, especially through reinforcement learning from human
feedback (RLHF). To provide accurate reward signals, a reward model (RM) should
stimulate deep thinking and conduct interpretable reasoning before assigning a
score or a judgment. However, existing RMs either produce opaque scalar scores
or directly generate the prediction of a preferred answer, making them struggle
to integrate natural language critiques, thus lacking interpretability.
Inspired by recent advances of long chain-of-thought (CoT) on
reasoning-intensive tasks, we hypothesize and validate that integrating
reasoning capabilities into reward modeling significantly enhances RM's
interpretability and performance. In this work, we introduce a new class of
generative reward models -- Reasoning Reward Models (ReasRMs) -- which
formulate reward modeling as a reasoning task. We propose a reasoning-oriented
training pipeline and train a family of ReasRMs, RM-R1. The training consists
of two key stages: (1) distillation of high-quality reasoning chains and (2)
reinforcement learning with verifiable rewards. RM-R1 improves LLM rollouts by
self-generating reasoning traces or chat-specific rubrics and evaluating
candidate responses against them. Empirically, our models achieve
state-of-the-art or near state-of-the-art performance of generative RMs across
multiple comprehensive reward model benchmarks, outperforming much larger
open-weight models (e.g., Llama3.1-405B) and proprietary ones (e.g., GPT-4o) by
up to 13.8%. Beyond final performance, we perform thorough empirical analysis
to understand the key ingredients of successful ReasRM training. To facilitate
future research, we release six ReasRM models along with code and data at
https://github.com/RM-R1-UIUC/RM-R1.
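To make the rubric-and-judgment idea in the abstract concrete, below is a minimal Python sketch of how a ReasRM-style generative judge could be rolled out and scored with a verifiable reward. The prompt template, the `<answer>A/B</answer>` tag format, the `generate` callable, and the helper names (`judge_pair`, `verifiable_reward`) are illustrative assumptions for this sketch, not the exact format used by RM-R1 or its released code.

```python
# Minimal sketch of a ReasRM-style pairwise judgment and its verifiable reward.
# Assumptions: prompt wording, answer-tag format, and function names are
# illustrative, not taken from the RM-R1 repository.
import re
from typing import Callable, Optional

JUDGE_TEMPLATE = """You are a reward model. First write an evaluation rubric
for the user's question, then reason step by step about how well each
candidate satisfies the rubric, and finally output your verdict as
<answer>A</answer> or <answer>B</answer>.

Question:
{question}

Candidate A:
{answer_a}

Candidate B:
{answer_b}
"""


def judge_pair(generate: Callable[[str], str],
               question: str, answer_a: str, answer_b: str) -> Optional[str]:
    """Run one judge rollout and parse the final verdict ('A' or 'B')."""
    rollout = generate(JUDGE_TEMPLATE.format(
        question=question, answer_a=answer_a, answer_b=answer_b))
    match = re.search(r"<answer>\s*([AB])\s*</answer>", rollout)
    return match.group(1) if match else None


def verifiable_reward(verdict: Optional[str], preferred: str) -> float:
    """RL reward: 1 if the parsed verdict matches the human preference label,
    0 for a wrong or unparsable verdict. The label makes the reward checkable."""
    return 1.0 if verdict == preferred else 0.0


if __name__ == "__main__":
    # Stand-in for an actual LLM call, so the sketch runs end to end.
    fake_llm = lambda prompt: "Rubric: accuracy, clarity... <answer>B</answer>"
    verdict = judge_pair(fake_llm, "What is 2 + 2?", "5", "4")
    print(verdict, verifiable_reward(verdict, preferred="B"))
```

In this reading of the abstract, stage (2) of the pipeline would optimize the judge with reinforcement learning against exactly this kind of checkable signal: the preference label from the reward-model training data verifies whether the self-generated rubric and reasoning led to the correct verdict.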