
SwiftKV: Fast Prefill-Optimized Inference with Knowledge-Preserving Model Transformation

October 4, 2024
Authors: Aurick Qiao, Zhewei Yao, Samyam Rajbhandari, Yuxiong He
cs.AI

Abstract

LLM inference for popular enterprise use cases, such as summarization, RAG, and code-generation, typically observes orders of magnitude longer prompt lengths than generation lengths. This characteristic leads to high cost of prefill and increased response latency. In this paper, we present SwiftKV, a novel model transformation and distillation procedure specifically designed to reduce the time and cost of processing prompt tokens while preserving high quality of generated tokens. SwiftKV combines three key mechanisms: i) SingleInputKV, which prefills later layers' KV cache using a much earlier layer's output, allowing prompt tokens to skip much of the model computation, ii) AcrossKV, which merges the KV caches of neighboring layers to reduce the memory footprint and support larger batch size for higher throughput, and iii) a knowledge-preserving distillation procedure that can adapt existing LLMs for SwiftKV with minimal accuracy impact and low compute and data requirement. For Llama-3.1-8B and 70B, SwiftKV reduces the compute requirement of prefill by 50% and the memory requirement of the KV cache by 62.5% while incurring minimum quality degradation across a wide range of tasks. In the end-to-end inference serving using an optimized vLLM implementation, SwiftKV realizes up to 2x higher aggregate throughput and 60% lower time per output token. It can achieve a staggering 560 TFlops/GPU of normalized inference throughput, which translates to 16K tokens/s for Llama-3.1-70B in 16-bit precision on 4x H100 GPUs.
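The following is a minimal, hypothetical sketch of the SingleInputKV idea described above, not the authors' released implementation: during prefill, prompt tokens run only the first few decoder layers, and the hidden state at that cut-off is reused to compute the K/V projections that fill the KV cache of every remaining layer. The names TinyDecoderLayer, swiftkv_prefill, and num_full_layers are illustrative, and attention is omitted for brevity.

```python
import torch
import torch.nn as nn

class TinyDecoderLayer(nn.Module):
    """Toy decoder layer: only the pieces needed to illustrate SingleInputKV."""
    def __init__(self, d_model: int):
        super().__init__()
        self.k_proj = nn.Linear(d_model, d_model, bias=False)
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))

    def kv(self, hidden: torch.Tensor):
        # Project a hidden state into this layer's K and V cache entries.
        return self.k_proj(hidden), self.v_proj(hidden)

    def forward(self, hidden: torch.Tensor):
        # Stand-in for the full attention + MLP block (attention omitted).
        return hidden + self.mlp(hidden)

def swiftkv_prefill(layers, prompt_hidden, num_full_layers):
    """Fill every layer's KV cache for the prompt while running only the
    first `num_full_layers` layers on the prompt tokens (SingleInputKV sketch)."""
    kv_cache = []
    h = prompt_hidden
    for layer in layers[:num_full_layers]:
        kv_cache.append(layer.kv(h))   # normal per-layer KV for the early layers
        h = layer(h)
    for layer in layers[num_full_layers:]:
        # Later layers reuse the cut-off hidden state: no forward pass is run
        # for prompt tokens, only the comparatively cheap K/V projections.
        kv_cache.append(layer.kv(h))
    return kv_cache

if __name__ == "__main__":
    d_model, n_layers = 64, 8
    layers = nn.ModuleList([TinyDecoderLayer(d_model) for _ in range(n_layers)])
    prompt = torch.randn(1, 128, d_model)            # (batch, prompt_len, d_model)
    cache = swiftkv_prefill(layers, prompt, num_full_layers=n_layers // 2)
    print(len(cache), cache[0][0].shape)             # 8 layers of (K, V) entries
```

AcrossKV would go one step further and let groups of neighboring later layers share a single (K, V) entry, shrinking the cache and allowing larger batches. As a sanity check on the headline throughput number: at roughly 2 x 70B = 140 GFLOPs per generated token, 4 GPUs x 560 TFlops/s = 2,240 TFlops/s works out to about 16K tokens/s, matching the reported figure.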
