Tag-LLM: Repurposing General-Purpose LLMs for Specialized Domains
February 6, 2024
Authors: Junhong Shen, Neil Tenenholtz, James Brian Hall, David Alvarez-Melis, Nicolo Fusi
cs.AI
Abstract
Large Language Models (LLMs) have demonstrated remarkable proficiency in
understanding and generating natural language. However, their capabilities wane
in highly specialized domains underrepresented in the pretraining corpus, such
as physical and biomedical sciences. This work explores how to repurpose
general LLMs into effective task solvers for specialized domains. We introduce
a novel, model-agnostic framework for learning custom input tags, which are
parameterized as continuous vectors appended to the LLM's embedding layer, to
condition the LLM. We design two types of input tags: domain tags are used to
delimit specialized representations (e.g., chemical formulas) and provide
domain-relevant context; function tags are used to represent specific functions
(e.g., predicting molecular properties) and compress function-solving
instructions. We develop a three-stage protocol to learn these tags using
auxiliary data and domain knowledge. By explicitly disentangling task domains
from task functions, our method enables zero-shot generalization to unseen
problems through diverse combinations of the input tags. It also boosts the LLM's
performance in various specialized domains, such as predicting protein or
chemical properties and modeling drug-target interactions, outperforming expert
models tailored to these tasks.
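
To make the core mechanism concrete, below is a minimal PyTorch sketch of what the abstract describes: domain and function tags parameterized as continuous vectors in the LLM's embedding space, spliced into the input sequence to condition a frozen model. This is an illustrative assumption of the design, not the authors' implementation; all names (`InputTags`, `tokens_per_tag`, the tag layout in `forward`) are hypothetical.

```python
# Minimal sketch (not the authors' code): custom input tags as learnable
# continuous vectors that condition a frozen LLM via its embedding layer.
import torch
import torch.nn as nn


class InputTags(nn.Module):
    """Learnable domain and function tag embeddings for a frozen LLM."""

    def __init__(self, hidden_size: int, num_domain_tags: int,
                 num_function_tags: int, tokens_per_tag: int = 1):
        super().__init__()
        # Each tag is one or more continuous vectors in the embedding space;
        # these are the only parameters that receive gradients.
        self.domain_tags = nn.Parameter(
            torch.randn(num_domain_tags, tokens_per_tag, hidden_size) * 0.02)
        self.function_tags = nn.Parameter(
            torch.randn(num_function_tags, tokens_per_tag, hidden_size) * 0.02)

    def forward(self, token_embeds: torch.Tensor,
                domain_id: int, function_id: int) -> torch.Tensor:
        """Condition the input by splicing tag vectors around it.

        token_embeds: (batch, seq_len, hidden_size) embeddings of the
        specialized input (e.g., a chemical formula), produced by the
        frozen LLM's own embedding layer.
        """
        batch = token_embeds.size(0)
        dom = self.domain_tags[domain_id].unsqueeze(0).expand(batch, -1, -1)
        fun = self.function_tags[function_id].unsqueeze(0).expand(batch, -1, -1)
        # Assumed layout: <domain> input <domain> <function>; the tagged
        # sequence is then fed past the embedding layer of the frozen LLM.
        return torch.cat([dom, token_embeds, dom, fun], dim=1)
```

Under this reading, the base LLM stays frozen and only the tag vectors are trained, which is what makes the framework model-agnostic; and because domain and function tags are separate parameters, a domain tag can at inference be paired with a function tag it was never trained alongside, which is the disentanglement the abstract credits for zero-shot generalization to unseen domain-function combinations.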