

Leveraging Open Knowledge for Advancing Task Expertise in Large Language Models

August 28, 2024
作者: Yuncheng Yang, Yulei Qin, Tong Wu, Zihan Xu, Gang Li, Pengcheng Guo, Hang Shao, Yucheng Shi, Ke Li, Xing Sun, Jie Yang, Yun Gu
cs.AI

Abstract

The cultivation of expertise in large language models (LLMs) for solving tasks in specific areas often requires special-purpose tuning that calibrates behavior toward the expected stable outputs. To avoid the huge cost of manually preparing instruction datasets and of training runs lasting hundreds of hours, exploiting open knowledge, including the wealth of publicly available low-rank adaptation (LoRA) models and instruction datasets, serves as a good starting point. However, existing methods for model and data selection focus on the performance of general-purpose capabilities while neglecting the knowledge gap exposed in domain-specific deployment. In the present study, we propose to bridge this gap by introducing a few human-annotated samples (i.e., K-shot) to advance the task expertise of LLMs with open knowledge. Specifically, we develop an efficient and scalable pipeline that cost-effectively produces task experts, in which the K-shot data guide the selection of the most promising expert candidates and of the task-relevant instructions. A mixture-of-experts (MoE) system is built to make the best use of the individual yet complementary knowledge of multiple experts. We unveil two keys to the success of such a system: 1) abidance by the K-shot data, and 2) insistence on diversity. For the former, we ensure that the selected models truly possess problem-solving ability on the K-shot data rather than being blind guessers; moreover, during data selection, instructions that share task-relevant contexts with the K-shot data are prioritized. For the latter, we highlight the diversity of the constituent experts and of the fine-tuning instructions throughout the model and data selection process. Extensive experimental results confirm the superiority of our approach over existing methods in utilizing open knowledge across various tasks. Code and models will be released later.
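To make the two selection criteria more concrete, the sketch below illustrates one plausible reading of the pipeline: LoRA candidates are kept only if their K-shot accuracy is backed by confident predictions (screening out blind guessers), and instructions are picked greedily by relevance to the K-shot contexts with a redundancy penalty that preserves diversity. This is a minimal illustration, not the authors' released code; the helper `evaluate_on_kshot`, the confidence threshold, and the MMR-style scoring are all assumptions standing in for the paper's actual scoring functions.

```python
import numpy as np

def evaluate_on_kshot(candidate, kshot):
    """Stand-in for running a LoRA candidate on the K-shot set.
    Returns per-sample correctness and answer confidence; a real
    pipeline would decode the model and inspect its logits."""
    rng = np.random.default_rng(abs(hash(candidate)) % 2**32)
    correct = rng.random(len(kshot)) > 0.5
    confidence = rng.uniform(0.4, 0.9, size=len(kshot))
    return correct, confidence

def select_experts(candidates, kshot, top_k=3, min_conf=0.6):
    """Keep candidates whose K-shot accuracy is backed by confident
    predictions, screening out 'blind guessers'."""
    scored = []
    for cand in candidates:
        correct, conf = evaluate_on_kshot(cand, kshot)
        if conf.mean() >= min_conf:  # hypothetical confidence gate
            scored.append((correct.mean() * conf.mean(), cand))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [cand for _, cand in scored[:top_k]]

def select_instructions(instr_embs, kshot_embs, budget, lam=0.5):
    """Greedy instruction selection: relevance to the K-shot centroid
    minus a redundancy penalty (maximal-marginal-relevance style),
    so the chosen set stays both task-relevant and diverse."""
    centroid = kshot_embs.mean(axis=0)
    relevance = instr_embs @ centroid
    chosen = []
    for _ in range(budget):
        if chosen:
            redundancy = (instr_embs @ instr_embs[chosen].T).max(axis=1)
        else:
            redundancy = np.zeros(len(instr_embs))
        score = lam * relevance - (1.0 - lam) * redundancy
        score[chosen] = -np.inf  # never pick the same instruction twice
        chosen.append(int(score.argmax()))
    return chosen

# Toy usage with random unit embeddings.
rng = np.random.default_rng(0)
instr = rng.normal(size=(100, 16))
instr /= np.linalg.norm(instr, axis=1, keepdims=True)
kshot = rng.normal(size=(5, 16))
kshot /= np.linalg.norm(kshot, axis=1, keepdims=True)
print(select_experts(["lora_a", "lora_b", "lora_c"], ["q1", "q2", "q3"]))
print(select_instructions(instr, kshot, budget=10))
```

The interplay of the two terms in `select_instructions` reflects the abstract's second key: a pure relevance ranking would cluster around near-duplicate instructions, whereas the redundancy penalty trades a little relevance for coverage, keeping the fine-tuning set diverse.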
