

Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards

March 10, 2026
Authors: Zhengzhao Ma, Xueru Wen, Boxi Cao, Yaojie Lu, Hongyu Lin, Jinglin Yang, Min He, Xianpei Han, Le Sun
cs.AI

Abstract

Reinforcement Learning from Verifiable Rewards (RLVR) significantly enhances the reasoning of large language models (LLMs) but suffers severely from calibration degeneration, where models become excessively over-confident in incorrect answers. Previous studies have sought to incorporate a calibration objective directly into the existing optimization target. However, our theoretical analysis demonstrates a fundamental gradient conflict between maximizing policy accuracy and minimizing calibration error. Building on this insight, we propose DCPO, a simple yet effective framework that systematically decouples the reasoning and calibration objectives. Extensive experiments demonstrate that DCPO not only preserves accuracy on par with GRPO but also achieves the best calibration performance and substantially mitigates the over-confidence issue. Our study provides valuable insights and a practical solution for more reliable LLM deployment.
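
To make the decoupling idea concrete, below is a minimal sketch (in PyTorch) of applying the two objectives to disjoint parameter groups with a stop-gradient between them. The class and function names (`PolicyWithConfidenceHead`, `decoupled_step`), the REINFORCE-style policy surrogate, and the Brier-score calibration loss are illustrative assumptions, not DCPO's actual formulation, which the abstract does not specify.

```python
# Hypothetical illustration of decoupling a reasoning (policy) objective from a
# calibration objective, so the two gradients never meet in shared parameters.
import torch
import torch.nn as nn


class PolicyWithConfidenceHead(nn.Module):
    """Toy policy: a backbone scores candidate answers, and a separate
    confidence head predicts the probability that the sampled answer is correct."""

    def __init__(self, dim: int = 16, n_answers: int = 4):
        super().__init__()
        self.backbone = nn.Linear(dim, dim)
        self.answer_head = nn.Linear(dim, n_answers)
        self.confidence_head = nn.Linear(dim, 1)

    def forward(self, x):
        h = torch.tanh(self.backbone(x))
        logits = self.answer_head(h)
        # Stop-gradient: the calibration objective cannot flow back into the
        # backbone, so it cannot conflict with the accuracy-maximizing update.
        confidence = torch.sigmoid(self.confidence_head(h.detach())).squeeze(-1)
        return logits, confidence


def decoupled_step(model, x, reward, answer_idx, opt_policy, opt_conf):
    """One update in which the two objectives touch disjoint parameter groups."""
    logits, confidence = model(x)

    # Reasoning objective: REINFORCE-style surrogate weighted by the verifiable
    # reward (1.0 if the sampled answer was verified correct, else 0.0).
    log_prob = torch.log_softmax(logits, dim=-1).gather(1, answer_idx[:, None]).squeeze(1)
    policy_loss = -(reward * log_prob).mean()

    # Calibration objective: Brier score between stated confidence and correctness.
    calibration_loss = ((confidence - reward) ** 2).mean()

    opt_policy.zero_grad()
    policy_loss.backward()       # updates backbone + answer head only
    opt_policy.step()

    opt_conf.zero_grad()
    calibration_loss.backward()  # updates confidence head only
    opt_conf.step()
    return policy_loss.item(), calibration_loss.item()


# Example usage with random data (purely illustrative).
model = PolicyWithConfidenceHead()
opt_policy = torch.optim.Adam(
    list(model.backbone.parameters()) + list(model.answer_head.parameters()), lr=1e-3
)
opt_conf = torch.optim.Adam(model.confidence_head.parameters(), lr=1e-3)

x = torch.randn(8, 16)
answer_idx = torch.randint(0, 4, (8,))
reward = torch.randint(0, 2, (8,)).float()
print(decoupled_step(model, x, reward, answer_idx, opt_policy, opt_conf))
```

In this sketch the stop-gradient (`h.detach()`) is one simple way to realize the decoupling the abstract describes: the calibration loss can only move the confidence head, so its gradient can no longer oppose the policy gradient inside the shared backbone.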