

Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards

March 10, 2026
作者: Zhengzhao Ma, Xueru Wen, Boxi Cao, Yaojie Lu, Hongyu Lin, Jinglin Yang, Min He, Xianpei Han, Le Sun
cs.AI

Abstract

Reinforcement Learning from Verifiable Rewards (RLVR) significantly enhances the reasoning abilities of large language models (LLMs), but it suffers severely from calibration degeneration, where models become excessively over-confident in incorrect answers. Previous studies have attempted to directly incorporate a calibration objective into the existing optimization target. However, our theoretical analysis demonstrates that there is a fundamental gradient conflict between maximizing policy accuracy and minimizing calibration error. Building on this insight, we propose DCPO, a simple yet effective framework that systematically decouples the reasoning and calibration objectives. Extensive experiments demonstrate that DCPO not only preserves accuracy on par with GRPO but also achieves the best calibration performance, substantially mitigating the over-confidence issue. Our study provides valuable insights and a practical solution for more reliable LLM deployment.
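To make the "calibration error" being minimized concrete, below is a minimal sketch of expected calibration error (ECE), the standard metric behind statements like "over-confident in incorrect answers." The equal-width binning scheme and all names here are illustrative conventions, not taken from the paper.

```python
# Minimal sketch of expected calibration error (ECE).
# ECE = sum over bins B of (|B| / N) * |accuracy(B) - mean_confidence(B)|.
# Binning scheme and variable names are illustrative, not from the paper.

def expected_calibration_error(confidences, correct, n_bins=10):
    """Compute ECE over equal-width confidence bins in [0, 1]."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Half-open bins [lo, hi); the last bin also includes 1.0.
        idx = [i for i, c in enumerate(confidences)
               if lo <= c < hi or (b == n_bins - 1 and c == 1.0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# An over-confident model: high stated confidence, low actual accuracy.
ece = expected_calibration_error([0.95, 0.9, 0.92, 0.98], [1, 0, 0, 0])
print(ece)  # large gap between ~0.94 mean confidence and 0.25 accuracy
```

A perfectly calibrated model would have ECE near zero; the calibration degeneration described above shows up as confidence in the top bins far exceeding accuracy there.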