Blockwise Parallel Transformer for Long Context Large Models
May 30, 2023
Authors: Hao Liu, Pieter Abbeel
cs.AI
Abstract
Transformers have emerged as the cornerstone of state-of-the-art natural language processing models, showcasing exceptional performance across a wide range of AI applications. However, the memory demands posed by the self-attention mechanism and the large feedforward network in Transformers limit their ability to handle long sequences, thereby creating challenges for tasks involving multiple long sequences or long-term dependencies. We present a distinct approach, Blockwise Parallel Transformer (BPT), that leverages blockwise computation of self-attention and feedforward network fusion to minimize memory costs. By processing longer input sequences while maintaining memory efficiency, BPT enables training sequences up to 32 times longer than vanilla Transformers and 2 to 4 times longer than previous memory-efficient methods. Extensive experiments on language modeling and reinforcement learning tasks demonstrate the effectiveness of BPT in reducing memory requirements and improving performance.
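
To illustrate the core idea described in the abstract, the sketch below shows, in JAX, how self-attention can be computed block by block with a streaming (log-sum-exp) softmax, and how the feedforward network can be fused into the same per-query-block loop so that neither the full attention matrix nor full-sequence feedforward activations are ever materialized. This is a minimal illustrative sketch, not the authors' implementation: the function names, block sizes, single-head attention, and the omission of layer normalization, causal masking, and the attention residual are all simplifying assumptions.

import jax
import jax.numpy as jnp

def ffn(x, w1, b1, w2, b2):
    # Single feedforward sublayer, applied per query block (fused into the loop).
    return jnp.maximum(x @ w1 + b1, 0.0) @ w2 + b2

def blockwise_parallel_block(q, k, v, w1, b1, w2, b2, q_block=256, kv_block=256):
    # q, k, v: (seq_len, d); seq_len is assumed divisible by both block sizes.
    seq_len, d = q.shape
    scale = 1.0 / jnp.sqrt(d)
    q_blocks = q.reshape(seq_len // q_block, q_block, d)
    k_blocks = k.reshape(seq_len // kv_block, kv_block, d)
    v_blocks = v.reshape(seq_len // kv_block, kv_block, d)

    def per_query_block(qb):
        # Scan over key/value blocks, accumulating a numerically stable softmax.
        def scan_kv(carry, kv):
            acc, row_sum, row_max = carry
            kb, vb = kv
            scores = (qb @ kb.T) * scale                    # (q_block, kv_block)
            new_max = jnp.maximum(row_max, scores.max(-1))  # running row-wise max
            correction = jnp.exp(row_max - new_max)         # rescale old accumulators
            p = jnp.exp(scores - new_max[:, None])
            acc = acc * correction[:, None] + p @ vb
            row_sum = row_sum * correction + p.sum(-1)
            return (acc, row_sum, new_max), None

        init = (jnp.zeros_like(qb),
                jnp.zeros(qb.shape[0]),
                jnp.full(qb.shape[0], -jnp.inf))
        (acc, row_sum, _), _ = jax.lax.scan(scan_kv, init, (k_blocks, v_blocks))
        attn_out = acc / row_sum[:, None]
        # Feedforward is applied here, inside the query-block loop, so its
        # activations only ever have block-sized sequence length.
        return attn_out + ffn(attn_out, w1, b1, w2, b2)

    out = jax.vmap(per_query_block)(q_blocks)  # lax.map would trade speed for memory
    return out.reshape(seq_len, d)

The key design point the sketch tries to convey is that the feedforward computation is performed immediately after each query block's attention output is available, rather than after attention has been computed for the whole sequence; this is what keeps peak activation memory proportional to the block size rather than the sequence length.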