

Thinking-Free Policy Initialization Makes Distilled Reasoning Models More Effective and Efficient Reasoners

September 30, 2025
Authors: Xin Xu, Cliveb AI, Kai Yang, Tianhao Chen, Yang Wang, Saiyong Yang, Can Yang
cs.AI

Abstract

Reinforcement Learning with Verifiable Reward (RLVR) effectively solves complex tasks but demands extremely long context lengths during training, leading to substantial computational costs. While multi-stage training can partially mitigate this, starting with overly short contexts often causes irreversible performance degradation, ultimately failing to reduce overall training compute significantly. In this paper, we introduce **T**hinking-**F**ree **P**olicy **I**nitialization (**TFPI**), a simple yet effective adaptation to RLVR that bridges long Chain-of-Thought (CoT) distillation and standard RLVR. TFPI employs a *ThinkFree* operation that explicitly discards the thinking content via a direct *</think>* append, reducing token usage during inference. Training with *ThinkFree*-adapted inputs improves performance and lowers token consumption, even in the original slow-thinking mode. Extensive experiments across various benchmarks show that TFPI accelerates RL convergence, achieves a higher performance ceiling, and yields more token-efficient reasoning models without specialized rewards or complex training designs. With TFPI alone, we train a 4B model to reach 89.0% accuracy on AIME24 and 65.5% on LiveCodeBench using fewer than 4K H20 hours.
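
As a rough illustration of the *ThinkFree* operation described in the abstract, the sketch below appends *</think>* to a distilled reasoning model's chat prompt so that generation starts directly in answer mode. This is a minimal sketch, not the authors' implementation: the model name, the exact token strings, and the assumption that the chat template opens a `<think>` block for the assistant turn are all illustrative and vary by model.

```python
# Minimal sketch of a ThinkFree-style input adaptation (illustrative, not the paper's code).
# Assumption: a DeepSeek-R1-distill-style chat template whose assistant turn begins a
# <think> ... </think> span; appending "</think>" closes that span immediately, so the
# model skips the thinking phase and produces the answer directly.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # placeholder distilled reasoner

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

question = "What is 17 * 23?"
messages = [{"role": "user", "content": question}]

# Standard prompt: the model would normally open a <think> block and emit a long CoT.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# ThinkFree adaptation: directly append the closing tag so no thinking tokens are generated.
thinkfree_prompt = prompt + "</think>"

inputs = tokenizer(thinkfree_prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated answer tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

In the paper's setting, inputs adapted this way are used during an initialization stage of RLVR training; the snippet only shows the prompt-level transformation at inference time.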