RWKV: Reinventing RNNs for the Transformer Era
May 22, 2023
Authors: Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, Xuzheng He, Haowen Hou, Przemyslaw Kazienko, Jan Kocon, Jiaming Kong, Bartlomiej Koptyra, Hayden Lau, Krishna Sri Ipsit Mantri, Ferdinand Mom, Atsushi Saito, Xiangru Tang, Bolun Wang, Johan S. Wind, Stanislaw Wozniak, Ruichong Zhang, Zhenyuan Zhang, Qihang Zhao, Peng Zhou, Jian Zhu, Rui-Jie Zhu
cs.AI
Abstract
Transformers have revolutionized almost all natural language processing (NLP) tasks but suffer from memory and computational complexity that scales quadratically with sequence length. In contrast, recurrent neural networks (RNNs) exhibit linear scaling in memory and computational requirements but struggle to match the performance of Transformers due to limitations in parallelization and scalability. We propose a novel model architecture, Receptance Weighted Key Value (RWKV), that combines the efficient parallelizable training of Transformers with the efficient inference of RNNs. Our approach leverages a linear attention mechanism and allows us to formulate the model as either a Transformer or an RNN, which parallelizes computations during training and maintains constant computational and memory complexity during inference, leading to the first non-Transformer architecture to be scaled to tens of billions of parameters. Our experiments reveal that RWKV performs on par with similarly sized Transformers, suggesting that future work can leverage this architecture to create more efficient models. This work presents a significant step towards reconciling the trade-offs between computational efficiency and model performance in sequence processing tasks.
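To make the RNN-mode inference claim concrete, below is a minimal NumPy sketch of the kind of WKV time-mixing recurrence the paper describes, using its w/u/k/v/r notation. It is an illustrative simplification rather than the authors' implementation: token-shift interpolation, the numerical-stability rescaling used in practice, the channel-mixing block, and all projection matrices are omitted, and the helper name wkv_recurrent is mine.

import numpy as np

def wkv_recurrent(k, v, w, u):
    """RNN-mode WKV: a fixed-size state per channel, one step per token.

    k, v : (T, C) key and value sequences
    w    : (C,)  per-channel decay rate (applied as exp(-w) each step)
    u    : (C,)  per-channel bonus for the current token
    """
    T, C = k.shape
    a = np.zeros(C)           # running exp-weighted sum of values (numerator)
    b = np.zeros(C)           # running sum of the exp weights (denominator)
    out = np.zeros((T, C))
    decay = np.exp(-w)
    for t in range(T):
        e_k = np.exp(k[t])
        e_uk = np.exp(u + k[t])
        # the current token receives the bonus u; all past tokens live in (a, b)
        out[t] = (a + e_uk * v[t]) / (b + e_uk)
        # decay the past and absorb the current token into the state
        a = decay * a + e_k * v[t]
        b = decay * b + e_k
    return out

# Toy usage with random tensors: the block output is the receptance gate
# sigmoid(r) applied to WKV, per channel.
rng = np.random.default_rng(0)
T, C = 8, 4
k, v, r = rng.normal(size=(3, T, C))
w = np.abs(rng.normal(size=C))    # decay kept non-negative
u = rng.normal(size=C)
out = 1.0 / (1.0 + np.exp(-r)) * wkv_recurrent(k, v, w, u)
print(out.shape)                  # (8, 4)

Because the state (a, b) has a fixed size, each generated token costs constant memory and compute; the same quantity can also be unrolled over the time dimension in a parallel, Transformer-like form for training, which is the trade-off the abstract refers to. The inputs above are random and purely illustrative.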