

In Search of Needles in a 10M Haystack: Recurrent Memory Finds What LLMs Miss

February 16, 2024
作者: Yuri Kuratov, Aydar Bulatov, Petr Anokhin, Dmitry Sorokin, Artyom Sorokin, Mikhail Burtsev
cs.AI

Abstract
This paper addresses the challenge of processing long documents using generative transformer models. To evaluate different approaches, we introduce BABILong, a new benchmark designed to assess model capabilities in extracting and processing distributed facts within extensive texts. Our evaluation, which includes benchmarks for GPT-4 and RAG, reveals that common methods are effective only for sequences up to 10^4 elements. In contrast, fine-tuning GPT-2 with recurrent memory augmentations enables it to handle tasks involving up to 10^7 elements. This achievement marks a substantial leap, as it is by far the longest input processed by any open neural network model to date, demonstrating a significant improvement in the processing capabilities for long sequences.
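The core idea behind the recurrent-memory approach can be illustrated with a toy sketch: a long input is split into fixed-size segments, and a small, bounded memory is carried from one segment pass to the next, so the effective context can far exceed the per-pass window. Everything below is hypothetical illustration; the actual method fine-tunes GPT-2 with learned memory tokens, whereas `process_segment` here is just a stand-in for a transformer forward pass.

```python
def process_segment(memory, segment):
    """Toy stand-in for a segment-level forward pass: 'remember' any
    fact tokens seen in this segment. (Hypothetical; the real model
    updates learned memory embeddings instead.)"""
    facts = [tok for tok in segment if tok.startswith("fact:")]
    # Memory is bounded: keep only the most recent few retained facts.
    return (memory + facts)[-4:]

def recurrent_read(tokens, segment_len=8):
    """Process an arbitrarily long token stream segment by segment,
    threading the memory state through every pass."""
    memory = []
    for i in range(0, len(tokens), segment_len):
        memory = process_segment(memory, tokens[i:i + segment_len])
    return memory

# A "needle" fact buried in a long stream of filler tokens:
stream = ["filler"] * 1000 + ["fact:needle"] + ["filler"] * 1000
print(recurrent_read(stream))  # the fact survives to the end: ['fact:needle']
```

Because each pass only ever sees one segment plus the small carried memory, the cost per step is constant in sequence length, which is what lets this style of model scale to inputs far beyond a single attention window.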