
Block Transformer: Global-to-Local Language Modeling for Fast Inference

June 4, 2024
作者: Namgyu Ho, Sangmin Bae, Taehyeon Kim, Hyunjik Jo, Yireun Kim, Tal Schuster, Adam Fisch, James Thorne, Se-Young Yun
cs.AI

Abstract

This paper presents the Block Transformer architecture, which adopts hierarchical global-to-local modeling in autoregressive transformers to mitigate the inference bottlenecks of self-attention. To apply self-attention, the key-value (KV) cache of all previous sequences must be retrieved from memory at every decoding step, so this KV cache IO becomes a significant bottleneck in batch inference. We observe that these costs stem from applying self-attention over the global context; we therefore isolate the expensive bottleneck of global modeling to the lower layers and apply fast local modeling in the upper layers. To mitigate the remaining costs in the lower layers, we aggregate input tokens into fixed-size blocks and then apply self-attention at this coarse level. Context information is aggregated into a single embedding so that the upper layers can decode the next block of tokens without global attention. Free of global-attention bottlenecks, the upper layers can fully utilize the compute hardware to maximize inference throughput. By leveraging both global and local modules, the Block Transformer architecture demonstrates 10-20x gains in inference throughput compared to vanilla transformers of equivalent perplexity. Our work introduces a new approach to optimizing language model inference through a novel application of global-to-local modeling. Code is available at https://github.com/itsnamgyu/block-transformer.
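
To make the global-to-local structure concrete, below is a minimal PyTorch sketch under stated assumptions: the block embedder simply concatenates and projects the token embeddings of each block, and the local token decoder is conditioned on the previous block's context embedding by plain addition. Class and parameter names (BlockTransformerSketch, block_len, and so on) are illustrative; this is not the authors' implementation, which is available at the repository linked above.

```python
# Illustrative sketch of a global-to-local ("block") decoder stack in PyTorch.
# Module names, the concatenate-and-project block embedder, and all
# hyperparameters are assumptions made for clarity, not the paper's code.
import torch
import torch.nn as nn


def causal_mask(n: int) -> torch.Tensor:
    # Additive attention mask: position i cannot attend to positions j > i.
    return torch.triu(torch.full((n, n), float("-inf")), diagonal=1)


class BlockTransformerSketch(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, block_len=4,
                 n_global_layers=4, n_local_layers=4, n_heads=8):
        super().__init__()
        self.block_len = block_len
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # Embedder: concatenate the token embeddings of one block and project
        # them into a single block embedding (one simple choice; an assumption).
        self.block_proj = nn.Linear(block_len * d_model, d_model)
        # Global block decoder: causal self-attention over block embeddings only,
        # so the KV cache grows with seq_len / block_len rather than seq_len.
        self.block_decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            n_global_layers,
        )
        # Local token decoder: attends only within a block, conditioned on the
        # preceding block's context embedding, so no global KV cache is needed.
        self.token_decoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            n_local_layers,
        )
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) with seq_len divisible by block_len
        B, T = tokens.shape
        L = self.block_len
        n_blocks = T // L
        x = self.tok_emb(tokens)                                       # (B, T, D)

        # 1) Aggregate each block of L tokens into one block embedding.
        blocks = self.block_proj(x.view(B, n_blocks, L * x.size(-1)))  # (B, n_blocks, D)

        # 2) Global modeling at coarse block granularity (causal over blocks).
        ctx = self.block_decoder(blocks, mask=causal_mask(n_blocks))   # (B, n_blocks, D)

        # 3) Local modeling: tokens of block i are conditioned on the context
        #    embedding of block i-1 (added here for simplicity) and attend only
        #    to tokens inside their own block.
        prev_ctx = torch.cat([torch.zeros_like(ctx[:, :1]), ctx[:, :-1]], dim=1)
        y = x + prev_ctx.repeat_interleave(L, dim=1)                   # (B, T, D)
        y = self.token_decoder(y.reshape(B * n_blocks, L, -1), mask=causal_mask(L))
        return self.lm_head(y).reshape(B, T, -1)                       # per-position logits


# Example usage (shapes only): (2, 16) token ids -> (2, 16, vocab_size) logits.
# logits = BlockTransformerSketch()(torch.randint(0, 32000, (2, 16)))
```

Because global attention in this design operates over one embedding per block rather than every token, the KV cache read at each decoding step shrinks by roughly the block length, which is the source of the batch-inference throughput gains described in the abstract.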

