Plug-and-Play 1.x-Bit KV Cache Quantization for Video Large Language Models
March 20, 2025
Authors: Keda Tao, Haoxuan You, Yang Sui, Can Qin, Huan Wang
cs.AI
Abstract
Video large language models (VideoLLMs) have demonstrated the capability to
process longer video inputs and enable complex reasoning and analysis. However,
due to the thousands of visual tokens produced by the video frames, the key-value (KV) cache significantly increases memory requirements, becoming a bottleneck for
inference speed and memory usage. KV cache quantization is a widely used
approach to address this problem. In this paper, we find that 2-bit KV quantization of VideoLLMs barely hurts model performance, while the limit of KV cache quantization at even lower bit-widths remains unexplored. To
bridge this gap, we introduce VidKV, a plug-and-play KV cache quantization
method to compress the KV cache to less than 2 bits. Specifically, (1) for the key cache, we propose a mixed-precision quantization strategy along the channel dimension: anomalous channels are quantized to 2 bits, while normal channels are quantized to 1 bit combined with a fast Fourier transform (FFT); (2) for the value cache, we implement 1.58-bit quantization while selectively preserving semantically salient visual tokens, achieving a better trade-off between precision and model performance. Importantly, our findings suggest that the value cache of VideoLLMs should be quantized in a per-channel fashion rather than the per-token fashion adopted in prior KV cache quantization work on LLMs. Empirically,
extensive results with LLaVA-OV-7B and Qwen2.5-VL-7B on six benchmarks show
that VidKV effectively compresses the KV cache to 1.5-bit and 1.58-bit
precision with almost no performance drop compared to the FP16 counterparts.
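Since the abstract describes VidKV only at a high level, the PyTorch sketch below illustrates the two quantization components under stated assumptions: the range-based outlier-channel criterion, the placement of the FFT along the token axis, the ternary {-1, 0, +1} grid for the 1.58-bit values, and the norm-based token-saliency score are all illustrative guesses, not the paper's confirmed design.

```python
import torch

def quantize_key_mixed_precision(K: torch.Tensor, outlier_ratio: float = 0.2) -> torch.Tensor:
    """K: (num_tokens, head_dim) key-cache slice for one attention head.

    Channels with the largest dynamic range are treated as anomalous and
    quantized to 2 bits per channel; the remaining channels are transformed
    with an FFT along the token axis and quantized to 1 bit (signs plus a
    per-channel magnitude). Returns the dequantized tensor for clarity; a
    real kernel would store packed integer codes.
    """
    T, D = K.shape
    # Assumption: rank channels by dynamic range and treat the top
    # fraction as anomalous (the paper's criterion is not given here).
    chan_range = K.max(dim=0).values - K.min(dim=0).values
    n_outliers = max(1, int(outlier_ratio * D))
    mask = torch.zeros(D, dtype=torch.bool, device=K.device)
    mask[torch.topk(chan_range, n_outliers).indices] = True

    K_hat = torch.empty_like(K)

    # 2-bit per-channel uniform quantization for anomalous channels.
    Ko = K[:, mask]
    lo = Ko.min(dim=0, keepdim=True).values
    hi = Ko.max(dim=0, keepdim=True).values
    scale = (hi - lo).clamp(min=1e-8) / 3            # 2 bits -> 4 levels
    q = torch.round((Ko - lo) / scale).clamp(0, 3)
    K_hat[:, mask] = q * scale + lo

    # 1-bit quantization in the FFT domain for normal channels: keep only
    # the signs of the spectrum plus one magnitude per channel, then invert.
    Kn = K[:, ~mask]
    F = torch.fft.rfft(Kn, dim=0)
    mag = F.abs().mean(dim=0, keepdim=True)
    F_q = torch.sign(F.real) * mag + 1j * (torch.sign(F.imag) * mag)
    K_hat[:, ~mask] = torch.fft.irfft(F_q, n=T, dim=0)
    return K_hat


def quantize_value_ternary(V: torch.Tensor, salient_ratio: float = 0.1) -> torch.Tensor:
    """V: (num_tokens, head_dim) value-cache slice for one attention head.

    Per-channel 1.58-bit (ternary, log2(3) ~ 1.58) quantization; tokens
    judged salient (here simply the largest L2 norms, an assumption) are
    kept in full precision.
    """
    T, D = V.shape
    keep_idx = torch.topk(V.norm(dim=1), max(1, int(salient_ratio * T))).indices

    # Ternary grid {-1, 0, +1} with a per-channel scale, reflecting the
    # abstract's finding that VideoLLM value caches should be quantized
    # per channel rather than per token.
    scale = V.abs().mean(dim=0, keepdim=True).clamp(min=1e-8)
    V_hat = torch.round(V / scale).clamp(-1, 1) * scale
    V_hat[keep_idx] = V[keep_idx]                    # preserve salient tokens
    return V_hat


if __name__ == "__main__":
    torch.manual_seed(0)
    K, V = torch.randn(256, 128), torch.randn(256, 128)
    print("key reconstruction MSE:", torch.mean((K - quantize_key_mixed_precision(K)) ** 2).item())
    print("value reconstruction MSE:", torch.mean((V - quantize_value_ternary(V)) ** 2).item())
```

Both functions compute quantization statistics along the channel dimension, matching the per-channel finding stated in the abstract; the packed sub-2-bit integer storage that delivers the actual memory savings is omitted for brevity.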