LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
January 2, 2024
作者: Hongye Jin, Xiaotian Han, Jingfeng Yang, Zhimeng Jiang, Zirui Liu, Chia-Yuan Chang, Huiyuan Chen, Xia Hu
cs.AI
Abstract
This work elicits LLMs' inherent ability to handle long contexts without fine-tuning. The limited length of the sequences seen during training may restrict the application of Large Language Models (LLMs) to long input sequences at inference time. In this work, we argue that existing LLMs themselves have inherent capabilities for handling long contexts. Based on this argument, we suggest extending LLMs' context window by themselves to fully utilize this inherent ability. We propose Self-Extend to stimulate LLMs' long-context handling potential. The basic idea is to construct bi-level attention information: the group level and the neighbor level. Both levels are computed with the original model's self-attention, which means the proposed method does not require any training. With only four lines of code modification, the proposed method can effortlessly extend existing LLMs' context window without any fine-tuning. We conduct comprehensive experiments, and the results show that the proposed method can effectively extend the context window length of existing LLMs.
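The bi-level idea can be illustrated with a short, self-contained sketch. The function below builds the kind of merged relative-position map that grouping implies: exact distances are kept inside a local neighbor window, while more distant tokens are mapped onto coarser group-level distances via floor division, so no relative position exceeds what the model saw in pretraining. The function name and the values `group_size=8` and `neighbor_window=512` are illustrative assumptions, not the paper's defaults, and a real implementation would apply this mapping inside the attention module (e.g., to the position ids fed to RoPE) rather than as a standalone function.

```python
import torch


def self_extend_rel_positions(seq_len: int,
                              group_size: int = 8,
                              neighbor_window: int = 512) -> torch.Tensor:
    """Sketch of a bi-level relative-position map in the spirit of Self-Extend.

    Keys within `neighbor_window` of the query keep their exact relative
    distance (neighbor level); keys farther away use floor-divided group
    indices (group level), shifted so the two ranges join at the window
    boundary. Hyperparameter values here are illustrative only.
    """
    q_pos = torch.arange(seq_len).unsqueeze(1)   # query positions, shape (L, 1)
    k_pos = torch.arange(seq_len).unsqueeze(0)   # key positions,   shape (1, L)
    rel = q_pos - k_pos                          # exact relative distances

    # Group level: floor-divide absolute positions, then shift the grouped
    # distances so they start right after the neighbor window.
    grouped = q_pos // group_size - k_pos // group_size
    grouped = grouped + (neighbor_window - neighbor_window // group_size)

    # Neighbor level: keep exact distances inside the local window;
    # fall back to the grouped distances everywhere else.
    merged = torch.where(rel < neighbor_window, rel, grouped)
    return merged


if __name__ == "__main__":
    # For an 8k-token sequence, the largest merged distance stays far below 8k,
    # which is the point of the grouping.
    pos = self_extend_rel_positions(8192)
    print(pos.max().item())
```

This reflects the "four lines of code" claim in spirit: the change is a remapping of relative positions before they enter the unmodified self-attention, with the exact edit depending on the model's attention implementation.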