

A Survey of On-Policy Distillation for Large Language Models

April 1, 2026
Authors: Mingyang Song, Mao Zheng
cs.AI

Abstract

Knowledge distillation has become a primary mechanism for transferring reasoning and domain expertise from frontier Large Language Models (LLMs) to smaller, deployable students. However, the dominant paradigm remains off-policy: students train on static teacher-generated data and never encounter their own errors during learning. This train–test mismatch, an instance of exposure bias, causes prediction errors to compound autoregressively at inference time. On-Policy Distillation (OPD) addresses this by letting the student generate its own trajectories and receive teacher feedback on these self-generated outputs, grounding distillation in the theory of interactive imitation learning. Despite rapid growth spanning divergence minimization, reward-guided learning, and self-play, the OPD literature remains fragmented with no unified treatment. This survey provides the first comprehensive overview of OPD for LLMs. We introduce a unified f-divergence framework over on-policy samples and organize the landscape along three orthogonal dimensions: feedback signal (logit-based, outcome-based, or self-play), teacher access (white-box, black-box, or teacher-free), and loss granularity (token-level, sequence-level, or hybrid). We systematically analyze representative methods, examine industrial deployments, and identify open problems including distillation scaling laws, uncertainty-aware feedback, and agent-level distillation.
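The "unified f-divergence framework over on-policy samples" can be sketched as follows. This is an illustrative reconstruction from the abstract alone, not the survey's exact formulation; the symbols $\pi_\theta$ (student policy), $p_T$ (teacher distribution), and the token-level decomposition are assumptions:

```latex
% Hedged sketch of an on-policy f-divergence distillation objective.
% x: prompt from dataset D; y: response sampled from the STUDENT (on-policy);
% the per-token f-divergence compares teacher and student next-token distributions.
\mathcal{L}_{\mathrm{OPD}}(\theta)
  = \mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
    \left[ \sum_{t=1}^{|y|}
      D_f\!\bigl( p_T(\cdot \mid x, y_{<t}) \,\big\|\, \pi_\theta(\cdot \mid x, y_{<t}) \bigr)
    \right]
```

Different convex generators $f$ then recover familiar special cases, e.g. $f(u) = u \log u$ gives the forward KL of standard distillation, while $f(u) = -\log u$ gives the mode-seeking reverse KL often used in on-policy variants. The key distinction from off-policy distillation is the sampling distribution: the expectation is over the student's own trajectories $y \sim \pi_\theta$, not teacher-generated data.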