A Survey of On-Policy Distillation for Large Language Models
April 1, 2026
Authors: Mingyang Song, Mao Zheng
cs.AI
Abstract
Knowledge distillation has become a primary mechanism for transferring reasoning and domain expertise from frontier Large Language Models (LLMs) to smaller, deployable students. However, the dominant paradigm remains off-policy: students train on static teacher-generated data and never encounter their own errors during learning. This train–test mismatch, an instance of exposure bias, causes prediction errors to compound autoregressively at inference time. On-Policy Distillation (OPD) addresses this by letting the student generate its own trajectories and receive teacher feedback on these self-generated outputs, grounding distillation in the theory of interactive imitation learning. Despite rapid growth spanning divergence minimization, reward-guided learning, and self-play, the OPD literature remains fragmented with no unified treatment. This survey provides the first comprehensive overview of OPD for LLMs. We introduce a unified f-divergence framework over on-policy samples and organize the landscape along three orthogonal dimensions: feedback signal (logit-based, outcome-based, or self-play), teacher access (white-box, black-box, or teacher-free), and loss granularity (token-level, sequence-level, or hybrid). We systematically analyze representative methods, examine industrial deployments, and identify open problems including distillation scaling laws, uncertainty-aware feedback, and agent-level distillation.
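As a rough illustration (our own sketch, not a formula taken from the survey), the "unified f-divergence framework over on-policy samples" can be read as an objective of the following form, where x is a prompt drawn from a data distribution D, y is a trajectory sampled from the student policy \pi_\theta, and p_T is the teacher's next-token distribution:

\[
\mathcal{L}_{\mathrm{OPD}}(\theta)
  \;=\; \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
  \left[\, \sum_{t=1}^{|y|}
     D_f\!\bigl( p_T(\cdot \mid x, y_{<t}) \,\big\|\, \pi_\theta(\cdot \mid x, y_{<t}) \bigr)
  \right]
\]

Different choices of f (e.g., forward KL, reverse KL, Jensen–Shannon) and of the granularity of the inner sum (per token, per sequence, or mixed) roughly correspond to the axes the survey uses to organize existing methods; the defining on-policy ingredient is that the outer expectation is taken over the student's own samples rather than a static teacher-generated dataset.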