
Knowledge Distillation of Large Language Models

June 14, 2023
Authors: Yuxian Gu, Li Dong, Furu Wei, Minlie Huang
cs.AI

Abstract

Knowledge Distillation (KD) is a promising technique for reducing the high computational demand of large language models (LLMs). However, previous KD methods have primarily been applied to white-box classification models or used to train small models that imitate black-box model APIs like ChatGPT. How to effectively distill knowledge from white-box generative LLMs remains under-explored, and it is becoming increasingly important as LLMs proliferate. In this work, we propose MiniLLM, which distills smaller language models from larger generative language models. We first replace the forward Kullback-Leibler divergence (KLD) objective in standard KD approaches with reverse KLD, which is better suited to KD on generative language models because it prevents the student model from overestimating the low-probability regions of the teacher distribution. Then, we derive an effective optimization approach to learn this objective. Extensive experiments in the instruction-following setting show that MiniLLM models generate more precise responses with higher overall quality, lower exposure bias, better calibration, and higher long-text generation performance. Our method also scales across model families with 120M to 13B parameters. We will release our code and model checkpoints at https://aka.ms/MiniLLM.
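
To make the objective swap concrete, below is a minimal PyTorch-style sketch of a token-level reverse KLD loss between a frozen teacher and a trainable student. The function name, tensor shapes, and the reduction to a per-token term are assumptions for illustration only; the paper's full method optimizes a sequence-level reverse KLD with a policy-gradient-style procedure over responses sampled from the student, which this sketch omits.

```python
import torch
import torch.nn.functional as F

def reverse_kld_loss(student_logits: torch.Tensor,
                     teacher_logits: torch.Tensor,
                     mask: torch.Tensor) -> torch.Tensor:
    """Token-level reverse KLD, KL(q_student || p_teacher).

    Forward KLD, KL(p || q), pushes the student q to cover all of the
    teacher's probability mass; reverse KLD, KL(q || p), instead penalizes
    the student for placing mass where the teacher assigns low probability.
    Shapes: logits are [batch, seq_len, vocab]; mask is [batch, seq_len]
    with 1 for real tokens and 0 for padding.
    """
    log_q = F.log_softmax(student_logits, dim=-1)      # log q(y | x)
    log_p = F.log_softmax(teacher_logits, dim=-1)      # log p(y | x)
    q = log_q.exp()
    kl_per_token = (q * (log_q - log_p)).sum(dim=-1)   # KL(q || p) at each position
    return (kl_per_token * mask).sum() / mask.sum()    # average over real tokens
```

In such a setup, the teacher logits would come from the frozen large model and the student logits from the small model being trained; only the student receives gradients from this loss.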