

The Many Faces of On-Policy Distillation: Pitfalls, Mechanisms, and Fixes

May 11, 2026
Authors: Siqi Zhu, Xuyan Ye, Hongyu Lu, Weiye Shi, Ge Liu
cs.AI

Abstract

On-policy distillation (OPD) and on-policy self-distillation (OPSD) have emerged as promising post-training methods for large language models, offering dense token-level supervision on trajectories sampled from the model's own policy. However, existing results on their effectiveness remain mixed: while OP(S)D has shown promise in system prompt and knowledge internalization, recent studies also report instability and degradation. In this work, we present a comprehensive empirical study of when OPD and OPSD work, when they fail, and why. We find that OPD on mathematical reasoning is highly sensitive to teacher choice and loss formulation, whereas OPSD fails in our tested settings due to test-time absence of instance-specific privileged information (PI). In contrast, OPSD is effective when PI represents a shared latent rule, such as a system prompt or alignment preference. We identify three failure mechanisms: (1) distribution mismatch between teacher and student caused by conditioning on student-generated prefixes, (2) optimization instability from biased TopK reverse-KL gradients, and (3) an OPSD-specific limitation where the student learns a PI-free policy that aggregates PI-conditioned teachers, which is insufficient when PI is instance-specific. We further show that stop-gradient TopK objectives, RLVR-adapted teachers, and SFT-stabilized students mitigate these failures.