

Discovering the Gems in Early Layers: Accelerating Long-Context LLMs with 1000x Input Token Reduction

September 25, 2024
Authors: Zhenmei Shi, Yifei Ming, Xuan-Phi Nguyen, Yingyu Liang, Shafiq Joty
cs.AI

Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities in handling long context inputs, but this comes at the cost of increased computational resources and latency. Our research introduces a novel approach to the long-context bottleneck that accelerates LLM inference and reduces GPU memory consumption. We demonstrate that LLMs can identify relevant tokens in their early layers, before generating answers to a query. Leveraging this insight, we propose an algorithm that uses the early layers of an LLM as filters to select and compress input tokens, significantly reducing the context length for subsequent processing. Our method, GemFilter, demonstrates substantial improvements in both speed and memory efficiency compared to existing techniques such as standard attention and SnapKV/H2O. Notably, it achieves a 2.4× speedup and a 30% reduction in GPU memory usage compared to SOTA methods. Evaluation on the Needle in a Haystack task shows that GemFilter significantly outperforms standard attention and SnapKV, and it demonstrates comparable performance on the LongBench challenge. GemFilter is simple, training-free, and broadly applicable across different LLMs. Crucially, it provides interpretability by allowing humans to inspect the selected input sequence. These findings not only offer practical benefits for LLM deployment, but also enhance our understanding of LLM internal mechanisms, paving the way for further optimizations in LLM design and inference. Our code is available at https://github.com/SalesforceAIResearch/GemFilter.
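
To make the filter-then-regenerate idea concrete, below is a minimal sketch of the pattern the abstract describes, assuming a Hugging Face transformers causal LM loaded with attn_implementation="eager" (so per-layer attentions are returned). The function name gemfilter_sketch and the filter_layer and keep parameters are illustrative placeholders, not the authors' implementation; see the linked repository for the actual code.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def gemfilter_sketch(model, tokenizer, prompt, filter_layer=13, keep=1024,
                     max_new_tokens=64):
    """Rank context tokens by the attention the last prompt token pays to
    them at an early layer, keep the top-n, and generate from the
    compressed prompt."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        # NOTE: for the real speedup, this forward pass would stop at
        # `filter_layer`; here the full model runs, purely for clarity.
        out = model(**inputs, output_attentions=True)
    # Attention of the final query position over all input positions at the
    # chosen early layer, averaged over heads -> shape (seq_len,)
    attn = out.attentions[filter_layer][0, :, -1, :].mean(dim=0)
    n = min(keep, attn.size(0))
    # Top-n positions, re-sorted so the selected tokens keep original order
    idx = attn.topk(n).indices.sort().values
    compressed_ids = inputs["input_ids"][:, idx]
    # Second pass: full generation sees only the much shorter prompt
    return model.generate(compressed_ids, max_new_tokens=max_new_tokens)


# Example usage (model name is an arbitrary choice for illustration):
# tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
# model = AutoModelForCausalLM.from_pretrained(
#     "mistralai/Mistral-7B-Instruct-v0.2", attn_implementation="eager")
# print(tok.decode(gemfilter_sketch(model, tok, long_prompt)[0]))
```

In a real implementation, the filtering pass would run only the first filter_layer transformer layers, which is where the reported speed and memory gains come from; the layer index and token budget above are placeholders rather than the paper's tuned settings.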
