Blockwise Parallel Transformer for Long Context Large Models
May 30, 2023
Authors: Hao Liu, Pieter Abbeel
cs.AI
Abstract
Transformers have emerged as the cornerstone of state-of-the-art natural
language processing models, showcasing exceptional performance across a wide
range of AI applications. However, the memory demands posed by the
self-attention mechanism and the large feedforward network in Transformers
limit their ability to handle long sequences, thereby creating challenges for
tasks involving multiple long sequences or long-term dependencies. We present a
distinct approach, Blockwise Parallel Transformer (BPT), that leverages
blockwise computation of self-attention and feedforward network fusion to
minimize memory costs. By processing longer input sequences while maintaining
memory efficiency, BPT enables training sequences up to 32 times longer than
vanilla Transformers and 2 to 4 times longer than previous memory-efficient
methods. Extensive experiments on language modeling and reinforcement learning
tasks demonstrate the effectiveness of BPT in reducing memory requirements and
improving performance.
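
To make the core idea concrete, below is a minimal sketch of blockwise attention with a fused feedforward pass, written in JAX. It is not the authors' implementation: it assumes single-head attention, omits masking, residual connections, and layer normalization, and the block sizes and parameter names (`q_block`, `kv_block`, `w1`, `b1`, `w2`, `b2`) are illustrative. The point it shows is that attention is computed one query block at a time with a streaming softmax over key/value blocks, and the feedforward network is applied to each block's output before the next block is processed, so neither the full attention matrix nor the full feedforward activation is ever materialized.

```python
# Sketch only: single-head, no mask, no residual/LayerNorm; block sizes are hypothetical.
import jax
import jax.numpy as jnp

def blockwise_attention_ffn(q, k, v, w1, b1, w2, b2, q_block=128, kv_block=128):
    seq_len, d = q.shape
    scale = 1.0 / jnp.sqrt(d)
    outputs = []
    for qs in range(0, seq_len, q_block):
        qb = q[qs:qs + q_block]                    # current query block, (q_block, d)
        # Streaming (numerically stable) softmax statistics for this query block.
        m = jnp.full((qb.shape[0],), -jnp.inf)     # running max of logits
        l = jnp.zeros((qb.shape[0],))              # running softmax normalizer
        acc = jnp.zeros_like(qb)                   # running weighted sum of values
        for ks in range(0, seq_len, kv_block):
            kb = k[ks:ks + kv_block]
            vb = v[ks:ks + kv_block]
            logits = (qb @ kb.T) * scale           # (q_block, kv_block) only
            m_new = jnp.maximum(m, logits.max(axis=-1))
            correction = jnp.exp(m - m_new)        # rescale old stats to new max
            p = jnp.exp(logits - m_new[:, None])
            l = l * correction + p.sum(axis=-1)
            acc = acc * correction[:, None] + p @ vb
            m = m_new
        attn_out = acc / l[:, None]
        # Fused feedforward: applied per query block, so the (seq_len, hidden)
        # activation for the whole sequence is never held in memory at once.
        hidden = jax.nn.gelu(attn_out @ w1 + b1)
        outputs.append(hidden @ w2 + b2)
    return jnp.concatenate(outputs, axis=0)
```

The design point illustrated here is that peak memory scales with the block size rather than the sequence length: the inner loop rescales previously accumulated statistics whenever a larger logit is encountered, which gives the same result as a full softmax, while the per-block feedforward call is what distinguishes this fusion from computing attention for the whole sequence first and only then running the feedforward network.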