Tell Your Model Where to Attend: Post-hoc Attention Steering for LLMs
November 3, 2023
Authors: Qingru Zhang, Chandan Singh, Liyuan Liu, Xiaodong Liu, Bin Yu, Jianfeng Gao, Tuo Zhao
cs.AI
Abstract
In human-written articles, we often leverage the subtleties of text style,
such as bold and italics, to guide the attention of readers. These textual
emphases are vital for the readers to grasp the conveyed information. When
interacting with large language models (LLMs), we have a similar need -
steering the model to pay closer attention to user-specified information, e.g.,
an instruction. Existing methods, however, are constrained to process plain
text and do not support such a mechanism. This motivates us to introduce PASTA
- Post-hoc Attention STeering Approach, a method that allows LLMs to read text
with user-specified emphasis marks. To this end, PASTA identifies a small
subset of attention heads and applies precise attention reweighting on them,
directing the model attention to user-specified parts. Like prompting, PASTA is
applied at inference time and does not require changing any model parameters.
Experiments demonstrate that PASTA can substantially enhance an LLM's ability
to follow user instructions or integrate new knowledge from user inputs,
leading to a significant performance improvement on a variety of tasks, e.g.,
an average accuracy improvement of 22% for LLAMA-7B. Our code is publicly
available at https://github.com/QingruZhang/PASTA.
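The core mechanism described above (reweighting the attention of selected heads toward user-emphasized tokens at inference time) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `steer_attention`, the scaling factor `alpha`, and the toy inputs are assumptions for demonstration, and the sketch operates on a single head's attention matrix rather than hooking into a real transformer.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def steer_attention(probs, emphasized, alpha=0.01):
    """Downweight attention to non-emphasized key positions, then renormalize.

    probs      : (num_queries, num_keys) attention probabilities for one head
    emphasized : boolean mask over key positions the user highlighted
    alpha      : factor (< 1) suppressing attention to non-emphasized keys
    """
    steered = probs.copy()
    steered[:, ~emphasized] *= alpha                  # suppress other keys
    steered /= steered.sum(axis=-1, keepdims=True)    # rows sum to 1 again
    return steered

# Toy example: one query over four keys; positions 1 and 2 are emphasized.
attn = softmax(np.array([[1.0, 2.0, 2.0, 1.0]]))
mask = np.array([False, True, True, False])
steered = steer_attention(attn, mask, alpha=0.1)
```

After steering, the attention mass on the emphasized positions increases while each row still sums to one; applying this only to a small, pre-identified subset of heads (rather than all of them) is what keeps the intervention precise.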