SnapKV: LLM Knows What You are Looking for Before Generation
April 22, 2024
作者: Yuhong Li, Yingbing Huang, Bowen Yang, Bharat Venkitesh, Acyr Locatelli, Hanchen Ye, Tianle Cai, Patrick Lewis, Deming Chen
cs.AI
Abstract
Large Language Models (LLMs) have made remarkable progress in processing
extensive contexts, with the Key-Value (KV) cache playing a vital role in
enhancing their performance. However, the growth of the KV cache in response to
increasing input length poses challenges to memory and time efficiency. To
address this problem, this paper introduces SnapKV, an innovative and
fine-tuning-free approach that efficiently minimizes KV cache size while still
delivering comparable performance in real-world applications.
We discover that each attention head in the model consistently focuses on
specific prompt attention features during generation. Meanwhile, this robust
pattern can be obtained from an "observation" window located at the end of the
prompts. Drawing on this insight, SnapKV automatically compresses KV caches by
selecting clustered important KV positions for each attention head. Our
approach significantly reduces the growing computational overhead and memory
footprint when processing long input sequences. Specifically, SnapKV achieves a
consistent decoding speed with a 3.6x increase in generation speed and an 8.2x
enhancement in memory efficiency compared to baseline when processing inputs of
16K tokens. At the same time, it maintains comparable performance to baseline
models across 16 long sequence datasets. Moreover, SnapKV can process up to
380K context tokens on a single A100-80GB GPU using HuggingFace implementation
with minor changes, exhibiting only a negligible accuracy drop in the
Needle-in-a-Haystack test. Further comprehensive studies suggest SnapKV's
potential for practical applications.
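The mechanism described in the abstract — using the attention that an "observation" window at the end of the prompt pays to earlier positions, then keeping clustered high-scoring KV positions per head — can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation; the function name `snapkv_compress`, the mean-pooling for clustering, and all parameter choices are assumptions for illustration.

```python
import numpy as np

def snapkv_compress(keys, values, attn_weights, obs_window, budget, kernel_size=5):
    """Illustrative sketch of SnapKV-style KV-cache compression (simplified).

    keys, values : [num_heads, seq_len, head_dim] prompt KV cache
    attn_weights : [num_heads, seq_len, seq_len] softmax attention over the prompt
    obs_window   : number of trailing prompt tokens forming the observation window
    budget       : KV positions kept per head, in addition to the window itself
    """
    num_heads, seq_len, _ = keys.shape
    prefix_len = seq_len - obs_window

    # Per head, aggregate how strongly the observation-window queries attend
    # to each earlier prompt position (a vote over the window's rows).
    votes = attn_weights[:, prefix_len:, :prefix_len].sum(axis=1)  # [H, prefix_len]

    # 1-D mean pooling so that clustered important positions are favoured
    # over isolated attention spikes (the "clustered" selection in the text).
    pad = kernel_size // 2
    padded = np.pad(votes, ((0, 0), (pad, pad)), mode="edge")
    pooled = np.stack(
        [padded[:, i:i + prefix_len] for i in range(kernel_size)]
    ).mean(axis=0)  # [H, prefix_len]

    # Keep the top-`budget` pooled positions per head, plus the window itself.
    kept = np.sort(np.argsort(pooled, axis=1)[:, -budget:], axis=1)

    comp_keys, comp_values = [], []
    for h in range(num_heads):
        idx = np.concatenate([kept[h], np.arange(prefix_len, seq_len)])
        comp_keys.append(keys[h, idx])
        comp_values.append(values[h, idx])
    return np.stack(comp_keys), np.stack(comp_values)
```

Because selection happens once at prompt time and needs no gradient updates, a sketch like this stays fine-tuning-free: the compressed cache simply replaces the full prompt KV cache before decoding begins.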