Let it Calm: Exploratory Annealed Decoding for Verifiable Reinforcement Learning
October 6, 2025
Authors: Chenghao Yang, Lin Gui, Chenxiao Yang, Victor Veitch, Lizhu Zhang, Zhuokai Zhao
cs.AI
Abstract
Reinforcement learning with verifiable rewards (RLVR) is a powerful paradigm
for enhancing the reasoning capabilities of large language models (LLMs), yet
its success hinges on effective exploration. An ideal exploration strategy must
navigate two fundamental challenges: it must preserve sample quality while also
ensuring training stability. While standard fixed-temperature sampling is
simple, it struggles to balance these competing demands, as high temperatures
degrade sample quality and low temperatures limit discovery. In this work, we
propose a simpler and more effective strategy, Exploratory Annealed Decoding
(EAD), grounded in the insight that exploration is most impactful on early
tokens which define a sequence's semantic direction. EAD implements an
intuitive **explore-at-the-beginning, exploit-at-the-end** strategy by
annealing the sampling temperature from high to low during generation. This
dynamic schedule encourages meaningful, high-level diversity at the start, then
gradually lowers the temperature to preserve sample quality and keep the
sampling distribution close to the target policy, which is essential for stable
training. We demonstrate that EAD is a lightweight, plug-and-play method that
significantly improves sample efficiency, consistently outperforming
fixed-temperature sampling across various RLVR algorithms and model sizes. Our
work suggests that aligning exploration with the natural dynamics of sequential
generation offers a robust path to improving LLM reasoning.
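As a rough illustration of the explore-at-the-beginning, exploit-at-the-end idea, the sketch below samples tokens autoregressively while linearly annealing the temperature from a high starting value to a low final one. The linear schedule, the endpoint temperatures `t_start`/`t_end`, and the Hugging Face-style `model`/`tokenizer` interface are assumptions made for illustration, not the paper's exact implementation.

```python
import torch


@torch.no_grad()
def annealed_sample(model, tokenizer, prompt, max_new_tokens=512,
                    t_start=1.2, t_end=0.6):
    """Decode one completion while linearly annealing the sampling
    temperature from t_start (explore) down to t_end (exploit).

    The linear schedule and the endpoint temperatures are illustrative
    assumptions, not the paper's exact configuration.
    """
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    for step in range(max_new_tokens):
        # Fraction of the way through generation, in [0, 1].
        frac = step / max(max_new_tokens - 1, 1)
        # High temperature on early tokens, low temperature on late ones.
        temperature = t_start + (t_end - t_start) * frac

        logits = model(input_ids).logits[:, -1, :]
        probs = torch.softmax(logits / temperature, dim=-1)
        next_token = torch.multinomial(probs, num_samples=1)

        input_ids = torch.cat([input_ids, next_token], dim=-1)
        if next_token.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(input_ids[0], skip_special_tokens=True)
```

In an RLVR rollout loop, such a sampler would replace the usual fixed-temperature call when generating candidate responses, leaving the reward computation and policy update untouched, which is consistent with the paper's framing of EAD as a plug-and-play change to decoding.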