Block Transformer: Global-to-Local Language Modeling for Fast Inference

June 4, 2024
作者: Namgyu Ho, Sangmin Bae, Taehyeon Kim, Hyunjik Jo, Yireun Kim, Tal Schuster, Adam Fisch, James Thorne, Se-Young Yun
cs.AI

Abstract

This paper presents the Block Transformer architecture, which applies hierarchical global-to-local modeling to autoregressive transformers to mitigate the inference bottlenecks of self-attention. To apply self-attention, the key-value (KV) cache of all preceding tokens must be retrieved from memory at every decoding step, so this KV-cache IO becomes a significant bottleneck in batch inference. We observe that these costs stem from applying self-attention over the global context, so we isolate the expensive bottleneck of global modeling to the lower layers and apply fast local modeling in the upper layers. To mitigate the remaining costs in the lower layers, we aggregate input tokens into fixed-size blocks and then apply self-attention at this coarse level. Context information is aggregated into a single embedding that enables the upper layers to decode the next block of tokens without global attention. Free of global-attention bottlenecks, the upper layers can fully utilize the compute hardware to maximize inference throughput. By leveraging global and local modules, the Block Transformer architecture demonstrates 10-20x gains in inference throughput compared to vanilla transformers of equivalent perplexity. Our work introduces a new approach to optimizing language model inference through a novel application of global-to-local modeling. Code is available at https://github.com/itsnamgyu/block-transformer.
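
The abstract describes the architecture only at a high level; the following is a minimal PyTorch sketch of that global-to-local split, not the authors' implementation. The class name (`BlockTransformerSketch`), the linear-projection block embedder, the choice to prepend the previous block's context embedding to the local decoder's input, and all hyperparameter defaults are illustrative assumptions; see the linked repository for the actual code.

```python
# Minimal sketch of global-to-local block modeling, under the assumptions
# stated above. Lower (global) layers attend only across block embeddings;
# upper (local) layers attend only within a block plus one context slot.
import torch
import torch.nn as nn


class BlockTransformerSketch(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, block_len=4,
                 n_global_layers=4, n_local_layers=4, n_heads=8):
        super().__init__()
        self.block_len = block_len
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Block embedder: pool each block of tokens into one coarse embedding
        # (here simply by projecting the concatenated token embeddings).
        self.block_proj = nn.Linear(block_len * d_model, d_model)
        # Global block decoder: causal attention over block embeddings only,
        # so its KV cache grows with seq_len / block_len, not seq_len.
        self.global_decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            n_global_layers)
        # Local token decoder: attends only within the current block,
        # conditioned on a single context embedding, no global KV cache.
        self.local_decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            n_local_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, num_blocks * block_len) integer ids
        B, T = tokens.shape
        L = self.block_len
        x = self.token_emb(tokens)                        # (B, T, d)
        blocks = x.view(B, T // L, L * x.size(-1))        # concat per block
        block_emb = self.block_proj(blocks)               # (B, nb, d)

        nb = block_emb.size(1)
        causal = torch.triu(
            torch.ones(nb, nb, dtype=torch.bool, device=tokens.device), 1)
        ctx = self.global_decoder(block_emb, mask=causal)  # coarse context

        # The global output for block i serves as context for decoding the
        # next block, so shift right and use a zero context for block 0.
        ctx = torch.cat([torch.zeros_like(ctx[:, :1]), ctx[:, :-1]], dim=1)

        # Prepend each block's context embedding to that block's tokens and
        # run causal attention over this (1 + block_len)-position window.
        tok = x.view(B * nb, L, -1)
        local_in = torch.cat([ctx.reshape(B * nb, 1, -1), tok], dim=1)
        n = local_in.size(1)
        local_mask = torch.triu(
            torch.ones(n, n, dtype=torch.bool, device=tokens.device), 1)
        h = self.local_decoder(local_in, mask=local_mask)

        # Position k (starting from the context slot) predicts token k + 1
        # of the block; drop the last position, which has no target here.
        logits = self.lm_head(h[:, :-1])
        return logits.reshape(B, T, -1)
```

The intended payoff is visible in the shapes: the global decoder's attention and KV cache scale with seq_len / block_len rather than seq_len, while the local decoder only ever attends over 1 + block_len positions, so at decode time neither stage reads a full-length global KV cache.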
