LaSeR: Reinforcement Learning with Last-Token Self-Rewarding
October 16, 2025
Authors: Wenkai Yang, Weijie Liu, Ruobing Xie, Yiju Guo, Lulu Wu, Saiyong Yang, Yankai Lin
cs.AI
Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as
a core paradigm for enhancing the reasoning capabilities of Large Language
Models (LLMs). To address the lack of verification signals at test time, prior
studies incorporate the training of the model's self-verification capability into
the standard RLVR process, thereby unifying reasoning and verification
capabilities within a single LLM. However, previous practice requires the LLM
to sequentially generate solutions and self-verifications using two separate
prompt templates, which significantly reduces efficiency. In this work, we
theoretically reveal that the closed-form solution to the RL objective of
self-verification can be reduced to a remarkably simple form: the true
reasoning reward of a solution is equal to its last-token self-rewarding score,
which is computed as the difference between the policy model's next-token
log-probability assigned to any pre-specified token at the solution's last
token and a pre-calculated constant, scaled by the KL coefficient. Based on
this insight, we propose LaSeR (Reinforcement Learning with Last-Token
Self-Rewarding), an algorithm that simply augments the original RLVR loss with
an MSE loss that aligns the last-token self-rewarding scores with verifier-based
reasoning rewards, jointly optimizing the reasoning and self-rewarding
capabilities of LLMs. The optimized self-rewarding scores can be utilized in
both training and testing to enhance model performance. Notably, our algorithm
derives these scores from the predicted next-token probability distribution of
the last token immediately after generation, incurring only the minimal extra
cost of one additional token of inference. Experiments show that our method not
only improves the model's reasoning performance but also equips it with
remarkable self-rewarding capability, thereby boosting its inference-time
scaling performance.
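
To make the described mechanism concrete, here is a minimal PyTorch-style sketch of the last-token self-rewarding score and the auxiliary MSE term outlined in the abstract. The function names and parameters (`anchor_token_id` for the pre-specified token, `precomputed_const`, `kl_coef`, `laser_aux_loss`) are illustrative assumptions, not the paper's actual implementation or interface.

```python
# Sketch, under stated assumptions: the self-rewarding score is read off the
# next-token distribution at the solution's last token, and an MSE term aligns
# it with the verifier-based reward alongside the standard RLVR loss.
import torch
import torch.nn.functional as F


def last_token_self_reward(last_token_logits: torch.Tensor,
                           anchor_token_id: int,
                           precomputed_const: float,
                           kl_coef: float) -> torch.Tensor:
    """r_hat = kl_coef * (log p(anchor token | last token) - precomputed constant).

    last_token_logits: [batch, vocab] next-token logits at each solution's final
    token, available right after generation at the cost of one extra token of
    inference.
    """
    log_probs = F.log_softmax(last_token_logits, dim=-1)   # [batch, vocab]
    anchor_logp = log_probs[:, anchor_token_id]            # [batch]
    return kl_coef * (anchor_logp - precomputed_const)


def laser_aux_loss(last_token_logits: torch.Tensor,
                   verifier_rewards: torch.Tensor,
                   anchor_token_id: int,
                   precomputed_const: float,
                   kl_coef: float) -> torch.Tensor:
    """MSE term aligning self-rewarding scores with verifier rewards;
    added to the original RLVR policy loss during training."""
    r_hat = last_token_self_reward(last_token_logits, anchor_token_id,
                                   precomputed_const, kl_coef)
    return F.mse_loss(r_hat, verifier_rewards)
```

At test time, the same score can be computed from the last token's predicted next-token distribution and used to rank or filter sampled solutions, which is how the abstract's inference-time scaling gains would be realized in practice.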