

Real-Time Aligned Reward Model beyond Semantics

January 30, 2026
Authors: Zixuan Huang, Xin Xia, Yuxi Ren, Jianbin Zheng, Xuefeng Xiao, Hongyan Xie, Li Huaqiu, Songshi Liang, Zhongxiang Dai, Fuzhen Zhuang, Jianxin Li, Yikun Ban, Deqing Wang
cs.AI

Abstract

Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique for aligning large language models (LLMs) with human preferences, yet it is susceptible to reward overoptimization, in which policy models overfit to the reward model, exploiting spurious reward patterns instead of faithfully capturing human intent. Prior mitigations primarily rely on surface semantic information and fail to effectively address the misalignment between the reward model (RM) and the policy model caused by continuous policy distribution shifts. This inevitably leads to a growing reward discrepancy, exacerbating reward overoptimization. To address these limitations, we introduce R2M (Real-Time Aligned Reward Model), a novel lightweight RLHF framework. R2M goes beyond vanilla reward models that depend solely on the semantic representations of a pretrained LLM. Instead, it leverages the evolving hidden states of the policy (namely, policy feedback) to align with the real-time distribution shift of the policy during the RL process. This work points to a promising new direction for improving the performance of reward models through real-time utilization of feedback from policy models.
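To make the "policy feedback" idea concrete, the sketch below shows one way a reward head could condition on both a frozen reward model's semantic features and the policy's current hidden states. This is a minimal illustration under our own assumptions: the class name PolicyFeedbackRewardHead, the dimensions, the pooling, and the concatenation-based fusion are hypothetical and are not taken from the paper's actual R2M architecture.

```python
import torch
import torch.nn as nn

class PolicyFeedbackRewardHead(nn.Module):
    """Toy reward head scoring a response from two signals:
    (1) a pooled semantic representation from a pretrained reward model, and
    (2) the policy model's current pooled hidden state ("policy feedback").
    Fusion scheme and dimensions are illustrative assumptions only."""

    def __init__(self, sem_dim: int, policy_dim: int, hidden_dim: int = 512):
        super().__init__()
        # Project the policy's evolving hidden state into a shared space,
        # so the reward can track the policy's real-time distribution shift.
        self.policy_proj = nn.Linear(policy_dim, hidden_dim)
        self.sem_proj = nn.Linear(sem_dim, hidden_dim)
        # Fuse both views and map to a scalar reward.
        self.scorer = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, sem_repr: torch.Tensor, policy_hidden: torch.Tensor) -> torch.Tensor:
        # sem_repr:      [batch, sem_dim]    pooled semantic features
        # policy_hidden: [batch, policy_dim] pooled hidden state of the current policy
        fused = torch.cat([self.sem_proj(sem_repr), self.policy_proj(policy_hidden)], dim=-1)
        return self.scorer(fused).squeeze(-1)  # [batch] scalar rewards


if __name__ == "__main__":
    head = PolicyFeedbackRewardHead(sem_dim=768, policy_dim=1024)
    sem = torch.randn(4, 768)       # stand-in for reward-model features
    policy = torch.randn(4, 1024)   # stand-in for policy hidden states at the current RL step
    print(head(sem, policy).shape)  # torch.Size([4])
```

In such a design, the semantic branch can remain frozen while the policy-hidden-state branch is re-read at every RL update, which is one plausible way to keep the reward signal aligned with the shifting policy distribution; the paper should be consulted for R2M's actual mechanism.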