
On the Non-decoupling of Supervised Fine-tuning and Reinforcement Learning in Post-training

January 12, 2026
Authors: Xueyan Niu, Bo Bai, Wei Han, Weixi Zhang
cs.AI

Abstract

Post-training of large language models routinely interleaves supervised fine-tuning (SFT) with reinforcement learning (RL). These two methods have different objectives: SFT minimizes the cross-entropy loss between model outputs and expert responses, while RL maximizes reward signals derived from human preferences or rule-based verifiers. Modern reasoning models have widely adopted the practice of alternating SFT and RL training; however, there is no theoretical account of whether the two can be decoupled. We prove that decoupling is impossible in either order: (1) SFT-then-RL coupling: RL increases the SFT loss under SFT optimality; and (2) RL-then-SFT coupling: SFT lowers the reward achieved by RL. Experiments on Qwen3-0.6B confirm the predicted degradation, verifying that SFT and RL cannot be separated without loss of prior performance in post-training.
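For reference, the two objectives contrasted in the abstract can be written in their standard form below; the notation (policy $\pi_\theta$, prompt $x$, expert response $y^*$, reward function $r$, dataset $\mathcal{D}$) is a common convention and is not taken from the paper itself.

$$
\mathcal{L}_{\text{SFT}}(\theta) = -\,\mathbb{E}_{(x,\,y^*) \sim \mathcal{D}} \left[ \sum_{t} \log \pi_\theta\!\left(y^*_t \mid x,\, y^*_{<t}\right) \right],
\qquad
\mathcal{J}_{\text{RL}}(\theta) = \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)} \left[ r(x, y) \right].
$$

SFT minimizes $\mathcal{L}_{\text{SFT}}$, while RL maximizes $\mathcal{J}_{\text{RL}}$; the paper's coupling results state that optimizing either objective after the other degrades the quantity previously optimized.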