Why Does the Effective Context Length of LLMs Fall Short?
October 24, 2024
Authors: Chenxin An, Jun Zhang, Ming Zhong, Lei Li, Shansan Gong, Yao Luo, Jingjing Xu, Lingpeng Kong
cs.AI
Abstract
Advancements in distributed training and efficient attention mechanisms have
significantly expanded the context window sizes of large language models
(LLMs). However, recent work reveals that the effective context lengths of
open-source LLMs often fall short, typically not exceeding half of their
training lengths. In this work, we attribute this limitation to the left-skewed
frequency distribution of relative positions formed in LLMs pretraining and
post-training stages, which impedes their ability to effectively gather distant
information. To address this challenge, we introduce ShifTed Rotary position
embeddING (STRING). STRING shifts well-trained positions to overwrite the
original ineffective positions during inference, enhancing performance within
their existing training lengths. Experimental results show that without
additional training, STRING dramatically improves the performance of the latest
large-scale models, such as Llama3.1 70B and Qwen2 72B, by over 10 points on
popular long-context benchmarks RULER and InfiniteBench, establishing new
state-of-the-art results for open-source LLMs. Compared to commercial models,
Llama 3.1 70B with STRING even achieves better performance than GPT-4-128K and
clearly surpasses Claude 2 and Kimi-chat.
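To make the abstract's core idea concrete, the minimal NumPy sketch below illustrates the kind of position remapping described: in a causal sequence of length L, a relative distance d occurs only L - d times, so the largest distances are rarely seen during training (the left-skewed frequency distribution), and the sketch shifts those rare distances back into the frequently trained range at inference time. This is only an illustration under assumed parameters, not the paper's implementation; the shift offset S, the threshold T, and the function names are hypothetical stand-ins, and the paper's exact construction (e.g., how it treats the local neighborhood and efficient attention kernels) should be taken from the source.

```python
import numpy as np


def relative_positions(seq_len: int) -> np.ndarray:
    """Causal relative-position matrix: rel[i, j] = i - j, clipped at 0
    above the diagonal (those entries are masked in causal attention)."""
    idx = np.arange(seq_len)
    rel = idx[:, None] - idx[None, :]
    return np.maximum(rel, 0)


def shift_positions(rel: np.ndarray, shift: int, keep_below: int) -> np.ndarray:
    """Illustrative remapping in the spirit of STRING (hypothetical parameters):
    relative distances >= keep_below are shifted down by `shift`, so the rarely
    trained tail reuses well-trained distances; smaller distances are untouched."""
    return np.where(rel >= keep_below, rel - shift, rel)


if __name__ == "__main__":
    L = 16          # stand-in for the training length
    S = 8           # hypothetical shift offset
    T = S + 4       # hypothetical threshold: only the largest distances are remapped
    rel = relative_positions(L)
    new_rel = shift_positions(rel, shift=S, keep_below=T)
    # Relative distances seen by the last query token, before and after the shift:
    print(rel[-1])      # [15 14 13 12 11 10 ... 1 0]
    print(new_rel[-1])  # [ 7  6  5  4 11 10 ... 1 0]
```

In this toy setting, the remapped matrix `new_rel` would replace the raw relative distances when computing the rotary angles, so the most distant tokens are attended to through distances the model saw frequently during training rather than the rarely trained tail.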