Q-Filters: Leveraging QK Geometry for Efficient KV Cache Compression
March 4, 2025
Authors: Nathan Godey, Alessio Devoto, Yu Zhao, Simone Scardapane, Pasquale Minervini, Éric de la Clergerie, Benoît Sagot
cs.AI
Abstract
Autoregressive language models rely on a Key-Value (KV) Cache, which avoids
re-computing past hidden states during generation, making it faster. As model
sizes and context lengths grow, the KV Cache becomes a significant memory
bottleneck, which calls for compression methods that limit its size during
generation. In this paper, we discover surprising properties of Query (Q) and
Key (K) vectors that allow us to efficiently approximate attention scores
without computing the attention maps. We propose Q-Filters, a training-free KV
Cache compression method that filters out less crucial Key-Value pairs based on
a single context-agnostic projection. Unlike many alternatives,
Q-Filters is compatible with FlashAttention, as it does not require direct
access to attention weights. Experimental results in long-context settings
demonstrate that Q-Filters is competitive with attention-based compression
methods such as SnapKV in retrieval tasks while consistently outperforming
efficient compression schemes such as Streaming-LLM in generation setups.
Notably, Q-Filters achieves a 99% accuracy in the needle-in-a-haystack task
with a x32 compression level while reducing the generation perplexity drop by
up to 65% in text generation compared to Streaming-LLM.
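The abstract describes scoring Key vectors against a single context-agnostic direction derived from Query geometry, so that unimportant KV pairs can be pruned without ever materializing an attention map. Below is a minimal, hypothetical PyTorch sketch of that idea, not the paper's exact procedure: the function names, the use of an SVD over offline query samples, and the sign-fixing heuristic are all assumptions for illustration.

```python
import torch

def compute_q_filter(q_samples: torch.Tensor) -> torch.Tensor:
    """Estimate a context-agnostic filter direction from query samples.

    q_samples: (num_samples, head_dim) queries gathered offline from
    calibration text for one attention head (an assumption of this sketch).
    Returns a unit vector: the principal right-singular direction of Q.
    """
    # SVD of the stacked queries; the top right-singular vector captures
    # the dominant direction queries point in for this head.
    _, _, vh = torch.linalg.svd(q_samples, full_matrices=False)
    u = vh[0]  # (head_dim,)
    # Heuristic sign fix (assumed): orient u so queries project positively
    # on average, making <k, u> a proxy for the unnormalized attention score.
    if (q_samples @ u).mean() < 0:
        u = -u
    return u

def prune_kv_cache(keys: torch.Tensor, values: torch.Tensor,
                   q_filter: torch.Tensor, keep: int):
    """Keep the `keep` KV pairs whose keys score highest along q_filter.

    keys, values: (seq_len, head_dim). No attention map is computed,
    which is what keeps this style of pruning FlashAttention-compatible.
    """
    scores = keys @ q_filter                        # (seq_len,)
    idx = scores.topk(keep).indices.sort().values   # preserve token order
    return keys[idx], values[idx]

if __name__ == "__main__":
    torch.manual_seed(0)
    head_dim, seq_len = 64, 1024
    q_calib = torch.randn(4096, head_dim)  # hypothetical calibration queries
    k, v = torch.randn(seq_len, head_dim), torch.randn(seq_len, head_dim)
    u = compute_q_filter(q_calib)
    # keep 1/32 of the cache, mirroring the x32 compression level above
    k_kept, v_kept = prune_kv_cache(k, v, u, keep=seq_len // 32)
    print(k_kept.shape, v_kept.shape)
```

In the actual method one such filter is precomputed per head, once per model; the sketch above shows only the per-head scoring and pruning step.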