CommVQ: Commutative Vector Quantization for KV Cache Compression
June 23, 2025
Authors: Junyan Li, Yang Zhang, Muhammad Yusuf Hassan, Talha Chafekar, Tianle Cai, Zhile Ren, Pengsheng Guo, Foroozan Karimzadeh, Colorado Reed, Chong Wang, Chuang Gan
cs.AI
Abstract
Large Language Models (LLMs) are increasingly used in applications requiring
long context lengths, but the key-value (KV) cache often becomes a memory
bottleneck on GPUs as context grows. To address this, we propose Commutative
Vector Quantization (CommVQ) to significantly reduce memory usage for
long-context LLM inference. We first introduce additive quantization with a
lightweight encoder and codebook to compress the KV cache, which can be decoded
via simple matrix multiplication. To further reduce computational costs during
decoding, we design the codebook to be commutative with Rotary Position
Embedding (RoPE) and train it using an Expectation-Maximization (EM) algorithm.
This enables efficient integration of decoding into the self-attention
mechanism. Our approach achieves high accuracy with additive quantization and
low overhead via the RoPE-commutative codebook. Experiments on long-context
benchmarks and GSM8K show that our method reduces FP16 KV cache size by 87.5%
with 2-bit quantization, while outperforming state-of-the-art KV cache
quantization methods. Notably, it enables 1-bit KV cache quantization with
minimal accuracy loss, allowing a LLaMA-3.1 8B model to run with a 128K context
length on a single RTX 4090 GPU. The source code is available at:
https://github.com/UMass-Embodied-AGI/CommVQ.
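The additive quantization and matrix-multiplication decoding described above can be sketched as follows. The greedy residual encoder and random codebooks here are illustrative stand-ins: the paper trains a lightweight neural encoder and fits the codebooks with an EM algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

d, M, K = 8, 4, 2                        # head dim, codebooks, codewords per codebook
codebook = rng.normal(size=(M, K, d))    # toy codebooks (EM-trained in the paper)

x = rng.normal(size=d)                   # a key/value vector to compress

# Greedy additive encoding: per codebook, pick the codeword that best
# reduces the residual (a stand-in for the paper's learned encoder).
residual = x.copy()
codes = np.zeros(M, dtype=int)
for m in range(M):
    errs = np.linalg.norm(residual[None, :] - codebook[m], axis=1)
    codes[m] = errs.argmin()
    residual -= codebook[m, codes[m]]

# Decoding is a single matrix multiplication: a one-hot code matrix times
# the stacked codebook sums the selected codeword from each codebook.
one_hot = np.zeros((M, K))
one_hot[np.arange(M), codes] = 1.0
x_hat = (one_hot.reshape(1, M * K) @ codebook.reshape(M * K, d)).ravel()

assert np.allclose(x_hat, sum(codebook[m, codes[m]] for m in range(M)))
```

Storage per vector is M·log2(K) bits (here 4 × 1 = 4 bits for an 8-dim vector); with K = 2 each codebook contributes one bit, which is how aggressive 1-bit and 2-bit budgets become reachable.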
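The RoPE-commutativity property can be checked numerically for a single 2x2 block: RoPE rotates pairs of dimensions, and blocks of the form [[a, -b], [b, a]] (scaled rotations, i.e. the matrix form of the complex number a + ib) commute with any such rotation. This is a minimal illustration of the algebraic structure a RoPE-commutative codebook can exploit, not the paper's exact construction.

```python
import numpy as np

def rope_rot(theta):
    # One 2x2 RoPE rotation block for a given position/frequency angle.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def comm_block(a, b):
    # Scaled-rotation block [[a, -b], [b, a]]: commutes with every rotation.
    return np.array([[a, -b], [b, a]])

R = rope_rot(0.3)             # a RoPE rotation
C = comm_block(1.7, -0.4)     # a codebook block of the commuting form

assert np.allclose(R @ C, C @ R)          # order of RoPE and decoding is free

# A generic 2x2 block does NOT commute with the rotation:
G = np.array([[1.0, 2.0], [3.0, 4.0]])
assert not np.allclose(R @ G, G @ R)
```

Because R @ C = C @ R, applying RoPE and decoding the quantized cache can be reordered, which is what lets the decode step fold into the self-attention computation instead of being a separate pass.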