

MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression

June 21, 2024
作者: Tianyu Fu, Haofeng Huang, Xuefei Ning, Genghan Zhang, Boju Chen, Tianqi Wu, Hongyi Wang, Zixiao Huang, Shiyao Li, Shengen Yan, Guohao Dai, Huazhong Yang, Yu Wang
cs.AI

Abstract

Sparse attention can effectively mitigate the significant memory and throughput demands of Large Language Models (LLMs) in long contexts. Existing methods typically employ a uniform sparse attention mask, applying the same sparse pattern across different attention heads and input lengths. However, this uniform approach fails to capture the diverse attention patterns inherent in LLMs, ignoring their distinct accuracy-latency trade-offs. To address this challenge, we propose the Mixture of Attention (MoA), which automatically tailors distinct sparse attention configurations to different heads and layers. MoA constructs and navigates a search space of various attention patterns and their scaling rules relative to input sequence lengths. It profiles the model, evaluates potential configurations, and pinpoints the optimal sparse attention compression plan. MoA adapts to varying input sizes, revealing that some attention heads expand their focus to accommodate longer sequences, while other heads consistently concentrate on fixed-length local contexts. Experiments show that MoA increases the effective context length by 3.9× with the same average attention span, boosting retrieval accuracy by 1.5-7.1× over the uniform-attention baseline across Vicuna-7B, Vicuna-13B, and Llama3-8B models. Moreover, MoA narrows the capability gaps between sparse and dense models, reducing the maximum relative performance drop from 9%-36% to within 5% across two long-context understanding benchmarks. MoA achieves a 1.2-1.4× GPU memory reduction and boosts decode throughput by 5.5-6.7× for 7B and 13B dense models on a single GPU, with minimal impact on performance.
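To illustrate the idea of heterogeneous, length-adaptive sparse attention described above, the following is a minimal sketch (not the authors' implementation): each head uses a causal sliding-window mask whose span follows a head-specific linear scaling rule span_h = alpha_h + beta_h · n over input length n. The function names and the (alpha, beta) values are hypothetical stand-ins for the per-head compression plan that MoA would discover via profiling and search.

```python
# Minimal sketch of heterogeneous sliding-window sparse attention with
# per-head spans that scale with input length (hypothetical values).
import torch
import torch.nn.functional as F

def elastic_spans(seq_len, alphas, betas):
    """Per-head attention spans that grow linearly with sequence length."""
    spans = [int(a + b * seq_len) for a, b in zip(alphas, betas)]
    return [max(1, min(s, seq_len)) for s in spans]

def heterogeneous_sparse_attention(q, k, v, alphas, betas):
    """q, k, v: (batch, heads, seq_len, head_dim). Each head applies a causal
    sliding-window mask whose width follows its own scaling rule."""
    b, h, n, d = q.shape
    scores = q @ k.transpose(-2, -1) / d ** 0.5          # (b, h, n, n)
    pos = torch.arange(n)
    dist = pos.unsqueeze(1) - pos.unsqueeze(0)            # query_pos - key_pos
    for head, span in enumerate(elastic_spans(n, alphas, betas)):
        # Keep only keys within `span` positions behind (or at) the query.
        mask = (dist < 0) | (dist >= span)
        scores[:, head].masked_fill_(mask, float('-inf'))
    return F.softmax(scores, dim=-1) @ v

# Toy usage: 4 heads; two keep a fixed local window (beta = 0),
# two expand their span as the sequence grows.
q = k = v = torch.randn(1, 4, 128, 64)
out = heterogeneous_sparse_attention(
    q, k, v,
    alphas=[32, 32, 8, 8],
    betas=[0.0, 0.0, 0.5, 0.25],
)
print(out.shape)  # torch.Size([1, 4, 128, 64])
```

In this sketch the mask is materialized densely for clarity; the memory and throughput gains reported in the abstract come from never storing or attending over the masked-out KV entries.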
