The Pitfalls of KV Cache Compression
September 30, 2025
Authors: Alex Chen, Renato Geh, Aditya Grover, Guy Van den Broeck, Daniel Israel
cs.AI
Abstract
KV cache compression promises increased throughput and efficiency with
negligible loss in performance. While the gains in throughput are
indisputable, and recent literature has indeed shown minimal degradation on
particular benchmarks, the consequences of compression in realistic scenarios
such as multi-instruction prompting remain insufficiently studied. In this
paper, we identify several pitfalls practitioners should be aware of when
deploying KV cache compressed LLMs. Importantly, we show that certain
instructions degrade much more rapidly with compression, effectively causing
them to be completely ignored by the LLM. As a practical example, we present
system prompt leakage as a case study, empirically showing the impact of
compression on leakage and on general instruction following. We find that
several factors contribute to prompt leakage: the compression method,
instruction order, and KV eviction bias. We then propose simple changes to KV
cache eviction policies that mitigate these factors and improve overall
performance on multi-instruction tasks.
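To make the eviction-policy terminology concrete, below is a minimal sketch of one score-based KV cache eviction step, in the style of cumulative-attention methods such as H2O. The function name, tensor shapes, and the `protected_idx` pinning heuristic are illustrative assumptions, not the method proposed in the paper.

```python
import torch

def evict_kv(keys, values, attn_scores, budget, protected_idx=None):
    """Minimal sketch of score-based KV cache eviction (assumed setup).

    keys, values:  [seq_len, num_heads, head_dim] cached tensors
    attn_scores:   [seq_len] cumulative attention each cached token has
                   received -- the usual importance proxy in eviction policies
    budget:        number of KV entries to keep
    protected_idx: optional indices (e.g., system-prompt instruction tokens)
                   that are never evicted; an illustrative mitigation for the
                   eviction bias described in the abstract, not the authors'
                   exact fix.
    """
    seq_len = keys.shape[0]
    if seq_len <= budget:
        return keys, values, attn_scores

    scores = attn_scores.clone()
    if protected_idx is not None:
        # Pin instruction tokens so low attention scores cannot evict them.
        scores[protected_idx] = float("inf")

    # Keep the highest-scoring entries, preserving their original order.
    keep = torch.topk(scores, budget).indices.sort().values
    return keys[keep], values[keep], attn_scores[keep]
```

Under a purely score-based policy, instruction tokens that attract little attention during generation are the first to be evicted, which is one way the "ignored instruction" failure mode can arise; pinning them, as sketched above, is among the simplest countermeasures.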