Alternating Reinforcement Learning for Rubric-Based Reward Modeling in Non-Verifiable LLM Post-Training
February 2, 2026
Authors: Ran Xu, Tianci Liu, Zihan Dong, Tony You, Ilgee Hong, Carl Yang, Linjun Zhang, Tao Zhao, Haoyu Wang
cs.AI
Abstract
Standard reward models typically predict scalar scores that fail to capture the multifaceted nature of response quality in non-verifiable domains such as creative writing or open-ended instruction following. To address this limitation, we propose Rubric-ARM, a framework that jointly optimizes a rubric generator and a judge with reinforcement learning from preference feedback. Unlike existing methods that rely on static rubrics or disjoint training pipelines, our approach treats rubric generation as a latent action learned to maximize judgment accuracy. To mitigate the non-stationarity of simultaneous updates, we introduce an alternating optimization strategy and provide a theoretical analysis showing that this schedule reduces gradient variance during training. Extensive experiments show that Rubric-ARM outperforms strong baselines across multiple benchmarks and significantly improves downstream policy alignment in both offline and online reinforcement learning settings.
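To make the described training loop concrete, the following is a minimal Python sketch of the alternating schedule, assuming a simplified setup and not reflecting the authors' implementation: the rubric generator acts as a policy whose sampled rubric (the latent action) conditions the judge, judgment accuracy on preference pairs supplies the reward, and the two components are updated in turn while the other is held fixed. All names (RubricGenerator, Judge, alternating_train, etc.) are hypothetical placeholders.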
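```python
"""Illustrative sketch only: alternating optimization of a rubric generator
and a judge with preference-based rewards. All components are placeholders."""
import random


class RubricGenerator:
    """Placeholder policy that samples a rubric (latent action) for a prompt."""

    def sample_rubric(self, prompt: str) -> str:
        # A real system would decode a rubric with an LLM.
        return f"Evaluation criteria for: {prompt}"

    def update(self, prompt: str, rubric: str, reward: float) -> None:
        # Placeholder for a policy-gradient step on the generator.
        pass


class Judge:
    """Placeholder judge that picks the preferred response given a rubric."""

    def prefer_first(self, prompt: str, rubric: str, resp_a: str, resp_b: str) -> bool:
        # A real judge would score each response against the rubric.
        return random.random() < 0.5

    def update(self, prompt, rubric, resp_a, resp_b, label, reward) -> None:
        # Placeholder for a policy-gradient step on the judge.
        pass


def judgment_rewards(generator, judge, batch):
    """Reward is 1.0 when the rubric-conditioned judgment matches the human label."""
    out = []
    for prompt, chosen, rejected in batch:
        rubric = generator.sample_rubric(prompt)
        correct = judge.prefer_first(prompt, rubric, chosen, rejected)
        out.append((prompt, rubric, chosen, rejected, 1.0 if correct else 0.0))
    return out


def alternating_train(generator, judge, preference_data, rounds=4, steps=10):
    """Alternate between updating the generator and the judge, freezing the other."""
    for _ in range(rounds):
        # Phase A: update the rubric generator; the judge is held fixed.
        for _ in range(steps):
            batch = random.sample(preference_data, k=min(8, len(preference_data)))
            for prompt, rubric, chosen, rejected, reward in judgment_rewards(generator, judge, batch):
                generator.update(prompt, rubric, reward)
        # Phase B: update the judge; the rubric generator is held fixed.
        for _ in range(steps):
            batch = random.sample(preference_data, k=min(8, len(preference_data)))
            for prompt, rubric, chosen, rejected, reward in judgment_rewards(generator, judge, batch):
                judge.update(prompt, rubric, chosen, rejected, label=0, reward=reward)


if __name__ == "__main__":
    data = [("Write a haiku about rain.", "preferred response", "rejected response")]
    alternating_train(RubricGenerator(), Judge(), data)
```
The point of the schedule is that each phase optimizes one component against a fixed counterpart, so the reward landscape seen by that component does not drift mid-update, which is the non-stationarity issue the abstract attributes to simultaneous updates.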