DDK: Distilling Domain Knowledge for Efficient Large Language Models
July 23, 2024
Authors: Jiaheng Liu, Chenchen Zhang, Jinyang Guo, Yuanxing Zhang, Haoran Que, Ken Deng, Zhiqi Bai, Jie Liu, Ge Zhang, Jiakai Wang, Yanan Wu, Congnan Liu, Wenbo Su, Jiamang Wang, Lin Qu, Bo Zheng
cs.AI
Abstract
Despite the advanced intelligence abilities of large language models (LLMs)
in various applications, they still face significant computational and storage
demands. Knowledge Distillation (KD) has emerged as an effective strategy to
improve the performance of a smaller LLM (i.e., the student model) by
transferring knowledge from a high-performing LLM (i.e., the teacher model).
Prevailing techniques in LLM distillation typically use a black-box model API
to generate high-quality pretrained and aligned datasets, or utilize white-box
distillation by altering the loss function to better transfer knowledge from
the teacher LLM. However, these methods ignore the knowledge differences
between the student and teacher LLMs across domains. This results in excessive
focus on domains with minimal performance gaps and insufficient attention to
domains with large gaps, reducing overall performance. In this paper, we
introduce a new LLM distillation framework called DDK, which dynamically
adjusts the composition of the distillation dataset in a smooth manner
according to the domain performance differences between the teacher and student
models, making the distillation process more stable and effective. Extensive
evaluations show that DDK significantly improves the performance of student
models, outperforming both continuously pretrained baselines and existing
knowledge distillation methods by a large margin.
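The core idea described in the abstract, shifting the distillation data mixture toward domains where the student trails the teacher most, can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the gap metric, the softmax temperature, and the domain names below are illustrative assumptions rather than details taken from DDK.

```python
import numpy as np

def domain_sampling_weights(teacher_scores, student_scores, temperature=0.1):
    """Turn per-domain teacher-student performance gaps into sampling
    probabilities: domains where the student lags further behind the
    teacher receive proportionally more distillation data."""
    gaps = np.maximum(np.asarray(teacher_scores) - np.asarray(student_scores), 0.0)
    # A softmax with a temperature keeps the re-weighting smooth instead of
    # collapsing all probability mass onto the single worst domain.
    logits = gaps / temperature
    logits -= logits.max()  # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Hypothetical validation accuracies per domain for teacher and student.
teacher = {"code": 0.62, "math": 0.55, "general": 0.71}
student = {"code": 0.41, "math": 0.38, "general": 0.66}

domains = list(teacher)
probs = domain_sampling_weights([teacher[d] for d in domains],
                                [student[d] for d in domains])
for d, p in zip(domains, probs):
    print(f"{d}: sample with probability {p:.2f}")
```

In this toy setting the math domain, where the student's gap to the teacher is largest, would be sampled most often when building the next distillation batch; periodically re-evaluating the gaps would let the mixture adapt as the student improves.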