Focused Transformer: Contrastive Training for Context Scaling
July 6, 2023
Authors: Szymon Tworkowski, Konrad Staniszewski, Mikołaj Pacek, Yuhuai Wu, Henryk Michalewski, Piotr Miłoś
cs.AI
Abstract
Large language models have an exceptional capability to incorporate new
information in a contextual manner. However, the full potential of such an
approach is often restrained due to a limitation in the effective context
length. One solution to this issue is to endow an attention layer with access
to an external memory, which consists of (key, value) pairs. Yet, as the
number of documents increases, the proportion of relevant keys to irrelevant
ones decreases, leading the model to focus more on the irrelevant keys. We
identify a significant challenge, dubbed the distraction issue, where keys
linked to different semantic values might overlap, making them hard to
distinguish. To tackle this problem, we introduce the Focused Transformer
(FoT), a technique that employs a training process inspired by contrastive
learning. This novel approach enhances the structure of the (key, value) space,
enabling an extension of the context length. Our method allows for fine-tuning
pre-existing, large-scale models to lengthen their effective context. This is
demonstrated by our fine-tuning of 3B and 7B OpenLLaMA checkpoints. The
resulting models, which we name LongLLaMA, exhibit advancements in tasks
requiring a long context. We further illustrate that our LongLLaMA models
adeptly manage a 256k context length for passkey retrieval.
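
To make the idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of the two ingredients named in the abstract: an attention layer whose keys and values are extended with entries from an external memory, and a crossbatch-style construction in which (key, value) pairs from unrelated documents are mixed in as negatives, so the model must learn to down-weight distracting keys. All function and variable names here (memory_attention, build_mixed_memory, tensor shapes) are illustrative assumptions, not names from the paper.

    # Hypothetical sketch, loosely inspired by the abstract above.
    import torch

    def memory_attention(q, local_k, local_v, mem_k, mem_v):
        """Single-head attention over local context plus external (key, value) memory.

        q:        (batch, d)        query for the current position
        local_k:  (batch, n_loc, d) keys from the local context
        local_v:  (batch, n_loc, d) values from the local context
        mem_k:    (batch, n_mem, d) keys retrieved from external memory
        mem_v:    (batch, n_mem, d) values retrieved from external memory
        """
        k = torch.cat([local_k, mem_k], dim=1)   # (batch, n_loc + n_mem, d)
        v = torch.cat([local_v, mem_v], dim=1)
        scores = torch.einsum("bd,bnd->bn", q, k) / k.shape[-1] ** 0.5
        attn = scores.softmax(dim=-1)            # irrelevant memory keys compete here
        return torch.einsum("bn,bnd->bd", attn, v)

    def build_mixed_memory(own_k, own_v, other_k, other_v):
        """Mix the current document's past pairs (positives) with pairs from
        unrelated documents (negatives), mimicking the contrastive-style setup."""
        mem_k = torch.cat([own_k, other_k], dim=1)
        mem_v = torch.cat([own_v, other_v], dim=1)
        return mem_k, mem_v

    if __name__ == "__main__":
        b, n_loc, n_half, d = 2, 16, 16, 64
        q = torch.randn(b, d)
        local_k, local_v = torch.randn(b, n_loc, d), torch.randn(b, n_loc, d)
        own_k, own_v = torch.randn(b, n_half, d), torch.randn(b, n_half, d)
        other_k, other_v = torch.randn(b, n_half, d), torch.randn(b, n_half, d)
        mem_k, mem_v = build_mixed_memory(own_k, own_v, other_k, other_v)
        out = memory_attention(q, local_k, local_v, mem_k, mem_v)
        print(out.shape)  # torch.Size([2, 64])

In this sketch, the softmax is taken over local and memory keys jointly, which is exactly where the abstract's "distraction issue" arises: as the memory grows, irrelevant keys claim a larger share of attention unless training shapes the key space to keep them separable.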