
MInference 1.0: Accelerating Pre-filling for Long-Context LLMs via Dynamic Sparse Attention

July 2, 2024
Authors: Huiqiang Jiang, Yucheng Li, Chengruidong Zhang, Qianhui Wu, Xufang Luo, Surin Ahn, Zhenhua Han, Amir H. Abdi, Dongsheng Li, Chin-Yew Lin, Yuqing Yang, Lili Qiu
cs.AI

Abstract

The computational challenges of Large Language Model (LLM) inference remain a significant barrier to their widespread deployment, especially as prompt lengths continue to increase. Due to the quadratic complexity of the attention computation, it takes 30 minutes for an 8B LLM to process a prompt of 1M tokens (i.e., the pre-filling stage) on a single A100 GPU. Existing methods for speeding up pre-filling often fail to maintain acceptable accuracy or efficiency when applied to long-context LLMs. To address this gap, we introduce MInference (Million-tokens Inference), a sparse calculation method designed to accelerate pre-filling of long-sequence processing. Specifically, we identify three unique patterns in long-context attention matrices (the A-shape, Vertical-Slash, and Block-Sparse) that can be leveraged for efficient sparse computation on GPUs. We determine the optimal pattern for each attention head offline and dynamically build sparse indices based on the assigned pattern during inference. With the pattern and sparse indices, we perform efficient sparse attention calculations via our optimized GPU kernels to significantly reduce the latency in the pre-filling stage of long-context LLMs. Our proposed technique can be directly applied to existing LLMs without any modifications to the pre-training setup or additional fine-tuning. By evaluating on a wide range of downstream tasks, including InfiniteBench, RULER, PG-19, and Needle In A Haystack, and models including LLaMA-3-1M, GLM4-1M, Yi-200K, Phi-3-128K, and Qwen2-128K, we demonstrate that MInference effectively reduces inference latency by up to 10x for pre-filling on an A100, while maintaining accuracy. Our code is available at https://aka.ms/MInference.
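
The abstract describes the mechanism at a high level: for each attention head, estimate offline which sparse pattern fits it, then at inference time build a sparse index cheaply and compute attention only over the selected positions. The snippet below is a minimal, hypothetical sketch of how indices for a Vertical-Slash-style pattern might be estimated from a dense pass over only the last few query tokens; the function name, parameters, and thresholds are illustrative assumptions, not the paper's optimized GPU kernels (see https://aka.ms/MInference for the actual implementation).

```python
# Illustrative sketch only (not the authors' code): estimate which key columns
# ("vertical" lines) and query-key offsets ("slash" diagonals) carry most of
# the attention mass, using dense attention on just the last few query tokens.
import torch

def estimate_vertical_slash_index(q, k, last_q=64, top_vertical=1000, top_slash=200):
    """q, k: [seq_len, head_dim] tensors for a single attention head.
    Returns (vertical_cols, slash_offsets) to keep in the sparse computation."""
    seq_len, head_dim = q.shape
    # Cheap estimate: dense attention for only the last `last_q` queries.
    scores = q[-last_q:] @ k.T / head_dim ** 0.5          # [last_q, seq_len]
    probs = torch.softmax(scores, dim=-1)

    # Vertical pattern: key columns receiving high attention across these queries.
    col_mass = probs.sum(dim=0)                            # [seq_len]
    vertical_cols = torch.topk(col_mass, min(top_vertical, seq_len)).indices

    # Slash pattern: diagonals (fixed query - key offsets) with high total mass.
    offsets = (seq_len - last_q + torch.arange(last_q).unsqueeze(1)) - torch.arange(seq_len)
    valid = offsets >= 0                                   # keep causal offsets only
    diag_mass = torch.zeros(seq_len, dtype=probs.dtype)
    diag_mass.scatter_add_(0, offsets[valid], probs[valid])
    slash_offsets = torch.topk(diag_mass, min(top_slash, seq_len)).indices
    return vertical_cols, slash_offsets

# Toy usage with random projections standing in for one head of a long prompt.
q = torch.randn(4096, 128)
k = torch.randn(4096, 128)
cols, slashes = estimate_vertical_slash_index(q, k)
print(cols.shape, slashes.shape)
```

In the paper's pipeline the selected columns and diagonals would then drive a sparse attention kernel over the full prompt, which is where the reported pre-filling speedup comes from; the sketch above only covers the index-estimation step.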
