Beyond Binary Rewards: Training LMs to Reason About Their Uncertainty
July 22, 2025
Authors: Mehul Damani, Isha Puri, Stewart Slocum, Idan Shenfeld, Leshem Choshen, Yoon Kim, Jacob Andreas
cs.AI
Abstract
When language models (LMs) are trained via reinforcement learning (RL) to
generate natural language "reasoning chains", their performance improves on a
variety of difficult question answering tasks. Today, almost all successful
applications of RL for reasoning use binary reward functions that evaluate the
correctness of LM outputs. Because such reward functions do not penalize
guessing or low-confidence outputs, they often have the unintended side-effect
of degrading calibration and increasing the rate at which LMs generate
incorrect responses (or "hallucinate") in other problem domains. This paper
describes RLCR (Reinforcement Learning with Calibration Rewards), an approach
to training reasoning models that jointly improves accuracy and calibrated
confidence estimation. During RLCR, LMs generate both predictions and numerical
confidence estimates after reasoning. They are trained to optimize a reward
function that augments a binary correctness score with a Brier score -- a
scoring rule for confidence estimates that incentivizes calibrated prediction.
We first prove that this reward function (or any analogous reward function that
uses a bounded, proper scoring rule) yields models whose predictions are both
accurate and well-calibrated. We next show that across diverse datasets, RLCR
substantially improves calibration with no loss in accuracy, on both in-domain
and out-of-domain evaluations -- outperforming both ordinary RL training and
classifiers trained to assign post-hoc confidence scores. While ordinary RL
hurts calibration, RLCR improves it. Finally, we demonstrate that verbalized
confidence can be leveraged at test time to improve accuracy and calibration
via confidence-weighted scaling methods. Our results show that explicitly
optimizing for calibration can produce more generally reliable reasoning
models.
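
The two mechanisms named in the abstract can be illustrated compactly. The sketch below is not the authors' released implementation: `rlcr_reward` shows one natural reading of a reward that augments a binary correctness score with a Brier score over the model's verbalized confidence, and `confidence_weighted_vote` shows one plausible form of confidence-weighted test-time scaling. All function names and signatures are illustrative assumptions.

```python
# Minimal sketches of the ideas described in the abstract (assumed forms,
# not the paper's code). Confidence values are probabilities in [0, 1].
from collections import defaultdict


def rlcr_reward(prediction: str, target: str, confidence: float) -> float:
    """Binary correctness augmented with a Brier-style calibration term.

    The Brier penalty is zero when the verbalized confidence matches the
    realized correctness, so the total reward stays bounded in [-1, 1].
    """
    correct = 1.0 if prediction.strip() == target.strip() else 0.0
    brier = (confidence - correct) ** 2   # proper scoring rule term
    return correct - brier


def confidence_weighted_vote(samples: list[tuple[str, float]]) -> str:
    """One possible test-time aggregation: sum verbalized confidence
    over repeated samples and return the answer with the most mass."""
    scores: dict[str, float] = defaultdict(float)
    for answer, confidence in samples:
        scores[answer.strip()] += confidence
    return max(scores, key=scores.get)


# A confident wrong answer is penalized more than a hedged wrong one:
print(rlcr_reward("42", "42", confidence=0.9))   # 1 - 0.01 = 0.99
print(rlcr_reward("41", "42", confidence=0.9))   # 0 - 0.81 = -0.81
print(rlcr_reward("41", "42", confidence=0.2))   # 0 - 0.04 = -0.04
print(confidence_weighted_vote([("42", 0.9), ("41", 0.3), ("41", 0.4)]))  # "42"
```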