Reward Reasoning Model

May 20, 2025
Authors: Jiaxin Guo, Zewen Chi, Li Dong, Qingxiu Dong, Xun Wu, Shaohan Huang, Furu Wei
cs.AI

Abstract

Reward models play a critical role in guiding large language models toward outputs that align with human expectations. However, an open challenge remains in effectively utilizing test-time compute to enhance reward model performance. In this work, we introduce Reward Reasoning Models (RRMs), which are specifically designed to execute a deliberate reasoning process before generating final rewards. Through chain-of-thought reasoning, RRMs leverage additional test-time compute for complex queries where appropriate rewards are not immediately apparent. To develop RRMs, we implement a reinforcement learning framework that fosters self-evolved reward reasoning capabilities without requiring explicit reasoning traces as training data. Experimental results demonstrate that RRMs achieve superior performance on reward modeling benchmarks across diverse domains. Notably, we show that RRMs can adaptively exploit test-time compute to further improve reward accuracy. The pretrained reward reasoning models are available at https://huggingface.co/Reward-Reasoning.
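The sketch below illustrates, under stated assumptions, how a reward reasoning model of this kind might be queried as a pairwise judge: it generates a chain-of-thought trace before emitting a verdict, and more test-time compute can be spent either by allowing a longer trace or by sampling several traces and voting. The checkpoint name "Reward-Reasoning/RRM-7B", the judge prompt, the \boxed{A}/\boxed{B} verdict format, and the voting helper are illustrative assumptions, not the authors' code; the released models at https://huggingface.co/Reward-Reasoning define their own naming and chat format.

```python
# Minimal sketch (assumptions noted above), not the authors' implementation.
import re
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Reward-Reasoning/RRM-7B"  # hypothetical checkpoint name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

def judge(query: str, answer_a: str, answer_b: str, max_new_tokens: int = 2048) -> str:
    """Ask the model to reason step by step, then name the better answer (A or B)."""
    prompt = (
        "You are a reward model. Compare the two candidate answers to the query.\n"
        "Think step by step, then end with a final verdict of the form "
        "\\boxed{A} or \\boxed{B}.\n\n"
        f"Query:\n{query}\n\nAnswer A:\n{answer_a}\n\nAnswer B:\n{answer_b}\n"
    )
    inputs = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    # A larger max_new_tokens budget lets the model spend more test-time compute
    # on the reasoning trace before committing to a reward.
    output = model.generate(
        inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    text = tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True)
    match = re.search(r"\\boxed\{([AB])\}", text)
    return match.group(1) if match else "A"  # fall back if no verdict is found

def judge_with_voting(query: str, answer_a: str, answer_b: str, samples: int = 5) -> str:
    """Spend additional test-time compute by sampling several traces and majority voting."""
    votes = [judge(query, answer_a, answer_b) for _ in range(samples)]
    return max(set(votes), key=votes.count)
```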
