Real-Time Aligned Reward Model beyond Semantics
January 30, 2026
Authors: Zixuan Huang, Xin Xia, Yuxi Ren, Jianbin Zheng, Xuefeng Xiao, Hongyan Xie, Li Huaqiu, Songshi Liang, Zhongxiang Dai, Fuzhen Zhuang, Jianxin Li, Yikun Ban, Deqing Wang
cs.AI
Abstract
Reinforcement Learning from Human Feedback (RLHF) is a pivotal technique for aligning large language models (LLMs) with human preferences, yet it is susceptible to reward overoptimization, in which the policy model overfits to the reward model and exploits spurious reward patterns instead of faithfully capturing human intent. Prior mitigations rely primarily on surface-level semantic information and fail to effectively address the misalignment between the reward model (RM) and the policy model caused by continuous shifts in the policy distribution. This misalignment inevitably leads to a growing reward discrepancy, exacerbating reward overoptimization. To address these limitations, we introduce R2M (Real-Time Aligned Reward Model), a novel lightweight RLHF framework. R2M goes beyond vanilla reward models that depend solely on the semantic representations of a pretrained LLM; instead, it leverages the evolving hidden states of the policy (namely, policy feedback) to align with the real-time distribution shift of the policy during the RL process. This work points to a promising new direction for improving reward models through real-time use of feedback from the policy model.
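To make the idea of "policy feedback" concrete, the following is a minimal PyTorch sketch of a reward head that fuses the reward model's own semantic representation with a hidden state taken from the current policy. The class name PolicyFeedbackRewardHead, the additive fusion, and all dimensions are illustrative assumptions; the abstract does not specify R2M's actual architecture.

```python
import torch
import torch.nn as nn


class PolicyFeedbackRewardHead(nn.Module):
    """Hypothetical reward head that conditions on the policy's evolving
    hidden state in addition to the reward model's semantic representation.
    A sketch of the general idea, not the paper's implementation."""

    def __init__(self, rm_dim: int, policy_dim: int, hidden_dim: int = 512):
        super().__init__()
        # Project the policy's hidden state into a shared fusion space.
        self.policy_proj = nn.Linear(policy_dim, hidden_dim)
        # Project the reward model's own last-token representation likewise.
        self.rm_proj = nn.Linear(rm_dim, hidden_dim)
        # Map the fused features to a scalar reward.
        self.scorer = nn.Sequential(nn.GELU(), nn.Linear(hidden_dim, 1))

    def forward(self, rm_hidden: torch.Tensor, policy_hidden: torch.Tensor) -> torch.Tensor:
        # rm_hidden: (batch, rm_dim) representation from the reward model backbone.
        # policy_hidden: (batch, policy_dim) hidden state from the current policy,
        # detached so the reward computation does not backpropagate into the policy.
        fused = self.rm_proj(rm_hidden) + self.policy_proj(policy_hidden.detach())
        return self.scorer(fused).squeeze(-1)


if __name__ == "__main__":
    # Toy usage with random features standing in for real model activations.
    head = PolicyFeedbackRewardHead(rm_dim=4096, policy_dim=4096)
    rm_feat = torch.randn(2, 4096)
    policy_feat = torch.randn(2, 4096)
    print(head(rm_feat, policy_feat).shape)  # torch.Size([2])
```

In this sketch, refreshing policy_hidden at each RL step is what would let the reward signal track the policy's real-time distribution shift, which is the alignment gap the abstract attributes to purely semantic reward models.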