

Training Task Experts through Retrieval Based Distillation

July 7, 2024
Authors: Jiaxin Ge, Xueying Jia, Vijay Viswanathan, Hongyin Luo, Graham Neubig
cs.AI

Abstract

One of the most reliable ways to create deployable models for specialized tasks is to obtain an adequate amount of high-quality task-specific data. However, for specialized tasks, such datasets often do not exist. Existing methods address this by creating such data with large language models (LLMs) and then distilling that knowledge into smaller models. However, these methods are limited by the quality of the LLMs' output and tend to generate repetitive or incorrect data. In this work, we present Retrieval Based Distillation (ReBase), a method that first retrieves data from rich online sources and then transforms it into domain-specific data. This method greatly enhances data diversity. Moreover, ReBase generates Chain-of-Thought reasoning and distills the reasoning capacity of LLMs. We test our method on 4 benchmarks, and results show that it significantly improves performance by up to 7.8% on SQuAD, 1.37% on MNLI, and 1.94% on BigBench-Hard.
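The pipeline described in the abstract (retrieve from online sources, transform into task-specific examples, attach Chain-of-Thought rationales for distillation) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the `retrieve` and `transform` functions, the toy corpus, and the keyword-overlap scoring are all assumptions; in ReBase the transformation and rationale generation are performed by an LLM.

```python
# Hypothetical sketch of a retrieval-then-transform distillation pipeline.
# All names and the toy corpus below are illustrative assumptions.

def retrieve(corpus, task_keywords, k=2):
    """Rank corpus entries by keyword overlap with the task description
    (a stand-in for a real dense or sparse retriever)."""
    scored = [(sum(w in doc["text"].lower() for w in task_keywords), doc)
              for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def transform(doc, task):
    """Convert a retrieved document into a task-specific training example.
    In ReBase an LLM rewrites the data and produces the rationale;
    here both are stubbed out."""
    return {
        "input": f"{task}: {doc['text']}",
        "rationale": "<LLM-generated chain-of-thought would go here>",
        "label": doc.get("label", "?"),
    }

corpus = [
    {"text": "The premise entails the hypothesis.", "label": "entailment"},
    {"text": "A recipe for sourdough bread.", "label": "other"},
]

# Retrieve task-relevant documents, then transform them into
# (input, rationale, label) triples for fine-tuning a smaller student model.
train_set = [transform(d, "NLI")
             for d in retrieve(corpus, ["entails", "hypothesis"])]
```

The key design point the abstract highlights is that grounding examples in retrieved real-world data yields more diverse training sets than sampling synthetic examples from an LLM alone.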

