Attention Heads of Large Language Models: A Survey
September 5, 2024
Authors: Zifan Zheng, Yezhaohui Wang, Yuxin Huang, Shichao Song, Bo Tang, Feiyu Xiong, Zhiyu Li
cs.AI
Abstract
Since the advent of ChatGPT, Large Language Models (LLMs) have excelled in
various tasks but remain largely black-box systems. Consequently, their
development relies heavily on data-driven approaches, which limits the ability
to improve performance by adjusting their internal architecture and reasoning
pathways. In response, many researchers have begun exploring the internal
mechanisms of LLMs, aiming to identify the nature of their reasoning
bottlenecks, with most studies focusing on attention heads. Our survey aims to
shed light on the internal reasoning processes of LLMs by concentrating on the
interpretability and underlying mechanisms of attention heads. We first distill
the human thought process into a four-stage framework: Knowledge Recalling,
In-Context Identification, Latent Reasoning, and Expression Preparation. Using
this framework, we systematically review existing research to identify and
categorize the functions of specific attention heads. Furthermore, we summarize
the experimental methodologies used to discover these special heads, dividing
them into two categories: Modeling-Free methods and Modeling-Required methods.
We also outline relevant evaluation methods and benchmarks. Finally, we
discuss the limitations of current research and propose several potential
future directions. Our reference list is open-sourced at
https://github.com/IAAR-Shanghai/Awesome-Attention-Heads.
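
To make the kind of head-level analysis surveyed here concrete, below is a minimal, illustrative sketch (not taken from the paper) that inspects per-head attention patterns in GPT-2 and flags heads whose attention mass concentrates on the immediately preceding token, a well-known "previous-token head" behavior. It assumes the Hugging Face transformers library and the public gpt2 checkpoint; the 0.5 threshold is arbitrary and chosen purely for illustration.

```python
# Minimal sketch: inspect per-head attention patterns in GPT-2 and flag
# heads whose attention mass concentrates on the previous token.
# Assumes the Hugging Face `transformers` library and the public `gpt2`
# checkpoint; the 0.5 threshold is arbitrary and for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Attention heads can specialize in surprisingly specific functions."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions is a tuple with one tensor per layer,
# each of shape (batch, num_heads, seq_len, seq_len).
for layer_idx, layer_attn in enumerate(outputs.attentions):
    attn = layer_attn[0]  # drop the batch dimension -> (num_heads, seq_len, seq_len)
    # attn.diagonal(offset=-1, ...) picks out attn[h, i+1, i]: how much
    # token i+1 attends to token i, i.e. attention to the previous token.
    prev_token_mass = attn.diagonal(offset=-1, dim1=-2, dim2=-1).mean(dim=-1)
    for head_idx, score in enumerate(prev_token_mass.tolist()):
        if score > 0.5:
            print(f"layer {layer_idx}, head {head_idx}: "
                  f"{score:.2f} average attention to the previous token")
```

This simple pattern inspection is only one example of the Modeling-Free style of analysis mentioned above; the methods reviewed in the survey also include interventions such as ablating or patching individual heads and measuring the effect on model outputs.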