

Don't Pay Attention

June 12, 2025
Authors: Mohammad Hammoud, Devang Acharya
cs.AI

Abstract

The Transformer has become the de facto standard for large language models and a wide range of downstream tasks across various domains. Despite its numerous advantages like inherent training parallelism, the Transformer still faces key challenges due to its inability to effectively process sequences beyond a fixed context window and the quadratic complexity of its attention mechanism. These challenges have renewed interest in RNN-like architectures, which offer linear scaling with sequence length and improved handling of long-range dependencies, albeit with limited parallelism due to their inherently recurrent nature. In this paper, we propose Avey, a new neural foundational architecture that breaks away from both attention and recurrence. Avey comprises a ranker and an autoregressive neural processor, which collaboratively identify and contextualize only the most relevant tokens for any given token, regardless of their positions in the sequence. Specifically, Avey decouples sequence length from context width, thus enabling effective processing of arbitrarily long sequences. Experimental results show that Avey compares favorably to the Transformer across a variety of standard short-range NLP benchmarks, while notably excelling at capturing long-range dependencies.
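To make the abstract's central idea concrete, below is a minimal, hypothetical sketch of how a "ranker plus autoregressive neural processor" can decouple sequence length from context width: every token in an arbitrarily long sequence is scored for relevance to the current token, only the top-k are kept regardless of position, and a small neural map contextualizes the token with that fixed-width selection. The function names (`rank_top_k`, `contextualize`), the dot-product relevance score, and the feed-forward mixing are illustrative assumptions, not Avey's actual components or parameterization, which are defined in the paper.

```python
# Hypothetical sketch (NOT the paper's implementation) of selecting a fixed-width
# context from an arbitrarily long sequence and contextualizing the current token.
import numpy as np

def rank_top_k(query_vec, token_embs, k):
    """Score every token against the current token and keep the k most relevant,
    regardless of where they appear in the sequence (dot-product score is an assumption)."""
    scores = token_embs @ query_vec
    top_idx = np.argsort(scores)[-k:]          # indices of the k highest-scoring tokens
    return token_embs[top_idx]

def contextualize(query_vec, selected_embs, W_in, W_out):
    """Toy stand-in for a neural processor: mix the current token with the mean of
    its selected context through a small feed-forward map (assumption)."""
    context = np.concatenate([query_vec, selected_embs.mean(axis=0)])
    return W_out @ np.tanh(W_in @ context)

# Usage: the context width k stays fixed even as the sequence grows,
# so per-token cost depends on k rather than on sequence length.
rng = np.random.default_rng(0)
d, seq_len, k = 16, 1000, 8                    # embedding dim, sequence length, context width
token_embs = rng.normal(size=(seq_len, d))     # embeddings of a long input sequence
query_vec = token_embs[-1]                     # token currently being processed
W_in = rng.normal(size=(d, 2 * d))
W_out = rng.normal(size=(d, d))

selected = rank_top_k(query_vec, token_embs, k)
new_rep = contextualize(query_vec, selected, W_in, W_out)
print(new_rep.shape)                           # (16,)
```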