Alternating Reinforcement Learning for Rubric-Based Reward Modeling in Non-Verifiable LLM Post-Training
February 2, 2026
作者: Ran Xu, Tianci Liu, Zihan Dong, Tony You, Ilgee Hong, Carl Yang, Linjun Zhang, Tao Zhao, Haoyu Wang
cs.AI
Abstract
Standard reward models typically predict scalar scores that fail to capture the multifaceted nature of response quality in non-verifiable domains, such as creative writing or open-ended instruction following. To address this limitation, we propose Rubric-ARM, a framework that jointly optimizes a rubric generator and a judge using reinforcement learning from preference feedback. Unlike existing methods that rely on static rubrics or disjoint training pipelines, our approach treats rubric generation as a latent action learned to maximize judgment accuracy. To mitigate the non-stationarity of simultaneous updates, we introduce an alternating optimization strategy and provide theoretical analysis showing that this schedule reduces gradient variance during training. Extensive experiments show that Rubric-ARM outperforms all baselines, achieving state-of-the-art performance on multiple benchmarks, and significantly improves downstream policy alignment in both offline and online reinforcement learning settings.
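The alternating schedule described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function and parameter names (`rubric_params`, `judge_params`, `grad_fn`, `period`) are hypothetical, and the objective here is a stand-in for the actual judgment-accuracy reward.

```python
# Minimal sketch of an alternating optimization schedule: while one module
# (the rubric generator or the judge) takes gradient steps, the other is
# held fixed, so each module trains against a stationary counterpart.
# All names here are illustrative, not from the paper.

def alternating_updates(rubric_params, judge_params, grad_fn, batches,
                        lr=0.1, period=2):
    """Alternate updates between the rubric generator and the judge.

    grad_fn(rubric_params, judge_params, batch) -> (g_rubric, g_judge)
    returns gradients of the shared objective w.r.t. each module.
    Every `period` steps, the roles of frozen/active module swap.
    """
    for step, batch in enumerate(batches):
        g_rubric, g_judge = grad_fn(rubric_params, judge_params, batch)
        if (step // period) % 2 == 0:
            # Phase A: update only the judge; rubric generator is frozen.
            judge_params = [p - lr * g for p, g in zip(judge_params, g_judge)]
        else:
            # Phase B: update only the rubric generator; judge is frozen.
            rubric_params = [p - lr * g for p, g in zip(rubric_params, g_rubric)]
    return rubric_params, judge_params
```

Because only one module moves per phase, the distribution the other module is trained against stays fixed within that phase, which is the intuition behind the reduced gradient variance the paper analyzes.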