Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains
February 6, 2024
Authors: Junhong Shen, Neil Tenenholtz, James Brian Hall, David Alvarez-Melis, Nicolo Fusi
cs.AI
Abstract
Large Language Models (LLMs) have demonstrated remarkable proficiency in
understanding and generating natural language. However, their capabilities wane
in highly specialized domains underrepresented in the pretraining corpus, such
as physical and biomedical sciences. This work explores how to repurpose
general LLMs into effective task solvers for specialized domains. We introduce
a novel, model-agnostic framework for learning custom input tags, which are
parameterized as continuous vectors appended to the LLM's embedding layer, to
condition the LLM. We design two types of input tags: domain tags are used to
delimit specialized representations (e.g., chemical formulas) and provide
domain-relevant context; function tags are used to represent specific functions
(e.g., predicting molecular properties) and compress function-solving
instructions. We develop a three-stage protocol to learn these tags using
auxiliary data and domain knowledge. By explicitly disentangling task domains
from task functions, our method enables zero-shot generalization to unseen
problems through diverse combinations of the input tags. It also boosts the
LLM's performance in various specialized domains, such as predicting protein or
chemical properties and modeling drug-target interactions, outperforming expert
models tailored to these tasks.
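
To make the conditioning mechanism concrete, the sketch below illustrates the core idea of learned input tags as trainable continuous vectors in a frozen LLM's embedding space, in the spirit of soft-prompt tuning. This is a minimal illustration under stated assumptions, not the authors' implementation; all names (`InputTag`, `TaggedLLM`, `num_tag_vectors`) are hypothetical, and the backbone is assumed to follow the Hugging Face interface (`get_input_embeddings`, `inputs_embeds`).

```python
# A minimal sketch (not the paper's code) of conditioning a frozen LLM with
# learned input tags: trainable continuous vectors prepended to the token
# embeddings. Separate domain and function tags can be recombined at
# inference time for unseen domain-function pairings.
import torch
import torch.nn as nn


class InputTag(nn.Module):
    """One tag (domain or function) parameterized as trainable vectors."""

    def __init__(self, num_vectors: int, embed_dim: int):
        super().__init__()
        # Trainable vectors living in the LLM's input embedding space.
        self.vectors = nn.Parameter(torch.randn(num_vectors, embed_dim) * 0.02)


class TaggedLLM(nn.Module):
    """Frozen backbone conditioned on a domain tag and a function tag."""

    def __init__(self, llm, embed_dim: int, num_tag_vectors: int = 4):
        super().__init__()
        self.llm = llm  # model-agnostic backbone; only the tags are trained
        for p in self.llm.parameters():
            p.requires_grad = False
        # e.g., a <protein> domain tag and a <binding-affinity> function tag
        self.domain_tag = InputTag(num_tag_vectors, embed_dim)
        self.function_tag = InputTag(num_tag_vectors, embed_dim)

    def forward(self, input_ids: torch.LongTensor):
        tok = self.llm.get_input_embeddings()(input_ids)  # (B, T, D)
        b = tok.size(0)
        dom = self.domain_tag.vectors.unsqueeze(0).expand(b, -1, -1)
        fun = self.function_tag.vectors.unsqueeze(0).expand(b, -1, -1)
        # Prepend tag vectors so they condition every subsequent token.
        inputs_embeds = torch.cat([dom, fun, tok], dim=1)
        return self.llm(inputs_embeds=inputs_embeds)
```

Because the domain and function tags are disentangled modules, a tag trained in one setting can, in principle, be paired with a different counterpart at inference time, which is the composition property the abstract attributes to zero-shot generalization.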