

PLaD: Preference-based Large Language Model Distillation with Pseudo-Preference Pairs

June 5, 2024
作者: Rongzhi Zhang, Jiaming Shen, Tianqi Liu, Haorui Wang, Zhen Qin, Feng Han, Jialu Liu, Simon Baumgartner, Michael Bendersky, Chao Zhang
cs.AI

Abstract

Large Language Models (LLMs) have exhibited impressive capabilities in various tasks, yet their vast parameter sizes restrict their applicability in resource-constrained settings. Knowledge distillation (KD) offers a viable solution by transferring expertise from large teacher models to compact student models. However, traditional KD techniques face specific challenges when applied to LLMs, including restricted access to LLM outputs, significant teacher-student capacity gaps, and the inherited mis-calibration issue. In this work, we present PLaD, a novel preference-based LLM distillation framework. PLaD exploits the teacher-student capacity discrepancy to generate pseudo-preference pairs in which teacher outputs are preferred over student outputs. PLaD then leverages a ranking loss to re-calibrate the student's estimation of sequence likelihood, which steers the student's focus towards understanding the relative quality of outputs instead of simply imitating the teacher. PLaD bypasses the need for access to the teacher LLM's internal states, tackles the student's expressivity limitations, and mitigates the student's mis-calibration issue. Through extensive experiments on two sequence generation tasks and with various LLMs, we demonstrate the effectiveness of our proposed PLaD framework.
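To make the two mechanisms in the abstract concrete, the sketch below illustrates a pseudo-preference pair (the teacher's output treated as preferred over the student's own output) combined with a margin-based ranking loss on length-normalized sequence log-likelihoods. This is a minimal PyTorch illustration of the general idea, not the authors' implementation: the helper names (`sequence_logprob`, `pseudo_preference_ranking_loss`), the length normalization, the hinge form of the loss, and the margin value are all assumptions for the sake of the example.

import torch
import torch.nn.functional as F

def sequence_logprob(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Length-normalized log-likelihood of each label sequence under the model.

    logits: (batch, seq_len, vocab); labels: (batch, seq_len) with -100 marking padding.
    """
    logp = F.log_softmax(logits, dim=-1)
    mask = labels.ne(-100)
    safe_labels = labels.clamp(min=0)  # avoid gather on the -100 padding index
    token_logp = logp.gather(-1, safe_labels.unsqueeze(-1)).squeeze(-1)
    token_logp = token_logp * mask
    return token_logp.sum(dim=-1) / mask.sum(dim=-1).clamp(min=1)

def pseudo_preference_ranking_loss(
    student_logits_on_teacher_out: torch.Tensor,  # student logits scored on the teacher's output
    teacher_out_labels: torch.Tensor,
    student_logits_on_student_out: torch.Tensor,  # student logits scored on its own output
    student_out_labels: torch.Tensor,
    margin: float = 0.1,  # assumed hyperparameter, not from the paper
) -> torch.Tensor:
    """Ranking loss over a pseudo-preference pair: the teacher output is treated
    as the preferred sequence, so the student is pushed to assign it a higher
    (normalized) sequence likelihood than its own dispreferred output."""
    lp_preferred = sequence_logprob(student_logits_on_teacher_out, teacher_out_labels)
    lp_dispreferred = sequence_logprob(student_logits_on_student_out, student_out_labels)
    # Hinge-style calibration term: nonzero whenever the dispreferred output is
    # not ranked below the preferred one by at least the margin.
    return F.relu(margin - (lp_preferred - lp_dispreferred)).mean()

Note that this objective only needs teacher *outputs* (sampled generations), not teacher logits or internal states, which is consistent with the black-box access setting the abstract describes; the specific loss form used in PLaD may differ from this hinge variant.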

