Thinking-Free Policy Initialization Makes Distilled Reasoning Models More Effective and Efficient Reasoners

September 30, 2025
作者: Xin Xu, Cliveb AI, Kai Yang, Tianhao Chen, Yang Wang, Saiyong Yang, Can Yang
cs.AI

Abstract

Reinforcement Learning with Verifiable Reward (RLVR) effectively solves complex tasks but demands extremely long context lengths during training, leading to substantial computational costs. While multi-stage training can partially mitigate this, starting with overly short contexts often causes irreversible performance degradation, ultimately failing to reduce overall training compute significantly. In this paper, we introduce **T**hinking-**F**ree **P**olicy **I**nitialization (**TFPI**), a simple yet effective adaptation to RLVR that bridges long Chain-of-Thought (CoT) distillation and standard RLVR. TFPI employs a simple *ThinkFree* operation, explicitly discarding the thinking content via a direct *</think>* append, to reduce token usage during inference. Training with *ThinkFree*-adapted inputs improves performance and lowers token consumption, even in the original slow-thinking mode. Extensive experiments across various benchmarks have shown that TFPI accelerates RL convergence, achieves a higher performance ceiling, and yields more token-efficient reasoning models without specialized rewards or complex training designs. With TFPI only, we train a 4B model to reach 89.0% accuracy on AIME24 and 65.5% on LiveCodeBench using less than 4K H20 hours.
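To make the *ThinkFree* operation concrete, below is a minimal, illustrative sketch of how an input might be adapted so that the model skips the thinking phase. The chat-template tokens, the helper name `build_thinkfree_prompt`, and the decision to pre-fill an empty think block are assumptions for illustration; the paper only specifies that the thinking content is discarded via a direct *</think>* append, and the exact template depends on the distilled model being used.

```python
# Minimal sketch of the ThinkFree input adaptation described in the abstract.
# Assumptions (not from the paper): a DeepSeek-R1-style chat template in which the
# assistant turn opens with a "<think>" tag; the role markers and helper name below
# are illustrative only.

def build_thinkfree_prompt(question: str) -> str:
    """Format a query so the assistant turn starts with a closed think block.

    Pre-filling "<think>\n</think>" (i.e., a direct </think> append) signals the
    model to emit the final answer without generating a long chain of thought.
    """
    return (
        "<|user|>\n"
        f"{question}\n"
        "<|assistant|>\n"
        "<think>\n</think>\n"  # directly close the think block to discard thinking content
    )


if __name__ == "__main__":
    prompt = build_thinkfree_prompt("What is 17 * 24?")
    print(prompt)
    # Pass `prompt` to the distilled reasoning model; because the think block is
    # already closed, decoding proceeds straight to the answer, reducing token usage.
```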