KV Packet: Recomputation-Free Context-Independent KV Caching for LLMs
April 14, 2026
Authors: Chuangtao Chen, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Bing Li, Ulf Schlichtmann
cs.AI
Abstract
Large Language Models (LLMs) rely heavily on Key-Value (KV) caching to reduce inference latency. However, standard KV caches are context-dependent: reusing a cached document in a new context requires recomputing its KV states to account for shifts in the attention distribution. Existing solutions such as CacheBlend, EPIC, and SAM-KV mitigate this issue by selectively recomputing a subset of tokens, but they still incur non-negligible computational overhead (FLOPs) and increased Time-to-First-Token (TTFT) latency. In this paper, we propose KV Packet, a recomputation-free cache-reuse framework that treats cached documents as immutable "packets" wrapped in lightweight, trainable soft-token adapters, which are trained via self-supervised distillation to bridge context discontinuities. Experiments on Llama-3.1 and Qwen2.5 demonstrate that KV Packet achieves near-zero FLOPs and lower TTFT than recomputation-based baselines, while retaining F1 scores comparable to those of the full-recomputation baseline.
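The core idea of reusing a frozen document cache wrapped in soft-token adapters can be illustrated with a minimal single-head attention sketch in numpy. All names and sizes below are illustrative assumptions, not the paper's implementation: the adapter key/value vectors are random stand-ins for the distillation-trained soft tokens, and positional encodings and multi-head structure are omitted. The point of the sketch is the FLOPs accounting: at serving time, fresh KV projections are computed only for the new query tokens, never for the cached document.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8                  # head dimension (toy size)
DOC, Q, A = 5, 3, 2    # document tokens, new query tokens, adapter soft tokens

# Shared key/value projection weights (stand-ins for one attention head)
Wk, Wv = rng.standard_normal((D, D)), rng.standard_normal((D, D))

# --- Offline: compute the document's KV states once; the "packet" is frozen ---
doc_x = rng.standard_normal((DOC, D))
doc_K, doc_V = doc_x @ Wk, doc_x @ Wv

# Hypothetical trained soft-token adapter: learned key/value vectors that wrap
# the packet to bridge the context discontinuity (random stand-ins here).
adp_K, adp_V = rng.standard_normal((A, D)), rng.standard_normal((A, D))

# --- Online: only the Q new query tokens need fresh KV computation ---
qry_x = rng.standard_normal((Q, D))
qry_K, qry_V = qry_x @ Wk, qry_x @ Wv      # Q projections, not DOC + Q

# Attention context = [adapter | cached document | new tokens]
K = np.concatenate([adp_K, doc_K, qry_K])  # shape (A + DOC + Q, D)
V = np.concatenate([adp_V, doc_V, qry_V])

def attend(q, K, V):
    """Scaled dot-product attention for a single query vector."""
    s = K @ q / np.sqrt(D)
    w = np.exp(s - s.max())
    w /= w.sum()
    return w @ V

out = attend(qry_x[-1], K, V)  # last query token attends over the full context
```

Under this accounting, the recomputation-based baselines would redo (part of) the `doc_x @ Wk` / `doc_x @ Wv` projections per request, whereas here the cached `doc_K`, `doc_V` are concatenated as-is and only the `A` adapter vectors absorb the context shift.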