

ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition

February 23, 2024
Authors: Lu Ye, Ze Tao, Yong Huang, Yang Li
cs.AI

Abstract

Self-attention is an essential component of large language models (LLMs) but a significant source of inference latency for long sequences. In multi-tenant LLM serving scenarios, the compute and memory operation cost of self-attention can be optimized by exploiting the probability that multiple LLM requests share system prompts in their prefixes. In this paper, we introduce ChunkAttention, a prefix-aware self-attention module that can detect matching prompt prefixes across multiple requests and share their key/value tensors in memory at runtime to improve the memory utilization of the KV cache. This is achieved by breaking monolithic key/value tensors into smaller chunks and structuring them into an auxiliary prefix tree. Consequently, on top of the prefix-tree-based KV cache, we design an efficient self-attention kernel, in which a two-phase partition algorithm is implemented to improve data locality during self-attention computation in the presence of shared system prompts. Experiments show that ChunkAttention can speed up the self-attention kernel by 3.2-4.8× compared to the state-of-the-art implementation, with the length of the system prompt ranging from 1024 to 4096.
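To make the chunked, prefix-tree KV cache described above concrete, the following is a minimal Python sketch of the idea: key/value tensors are split into fixed-size chunks keyed by the tokens they cover, so requests whose prompts start with the same tokens resolve to the same chunk nodes instead of duplicating storage. The class names, the chunk size of 64, and the use of token-id tuples as trie keys are illustrative assumptions, not the paper's implementation (which additionally runs a two-phase partitioned attention kernel over these shared chunks).

```python
# Minimal sketch of a prefix-aware, chunked KV cache (illustrative only).
# Assumed names/values: ChunkNode, PrefixTreeKVCache, CHUNK_SIZE = 64.
from dataclasses import dataclass, field
import numpy as np

CHUNK_SIZE = 64  # tokens per chunk (assumed value)


@dataclass
class ChunkNode:
    """One prefix-tree node holding the K/V tensors for up to CHUNK_SIZE tokens."""
    tokens: tuple                                   # token ids covered by this chunk
    keys: np.ndarray                                # shape (<=CHUNK_SIZE, head_dim)
    values: np.ndarray                              # shape (<=CHUNK_SIZE, head_dim)
    children: dict = field(default_factory=dict)    # token tuple -> ChunkNode


class PrefixTreeKVCache:
    """Shares K/V chunks between sequences whose prompts begin with the same tokens."""

    def __init__(self, head_dim: int):
        self.head_dim = head_dim
        self.root = ChunkNode(tokens=(),
                              keys=np.empty((0, head_dim)),
                              values=np.empty((0, head_dim)))

    def insert(self, token_ids, keys, values):
        """Insert a sequence; chunks that match an existing prefix path are reused."""
        node = self.root
        path = [node]
        for start in range(0, len(token_ids), CHUNK_SIZE):
            chunk_tokens = tuple(token_ids[start:start + CHUNK_SIZE])
            child = node.children.get(chunk_tokens)
            if child is None:  # no matching prefix chunk -> allocate new K/V storage
                child = ChunkNode(tokens=chunk_tokens,
                                  keys=keys[start:start + CHUNK_SIZE].copy(),
                                  values=values[start:start + CHUNK_SIZE].copy())
                node.children[chunk_tokens] = child
            node = child
            path.append(node)
        return path  # the chunks this sequence attends over


if __name__ == "__main__":
    head_dim = 8
    cache = PrefixTreeKVCache(head_dim)
    shared_prompt = list(range(128))   # two requests share a 128-token system prompt
    rng = np.random.default_rng(0)

    def kv(n):
        return rng.standard_normal((n, head_dim)), rng.standard_normal((n, head_dim))

    k1, v1 = kv(160)
    k2, v2 = kv(160)
    path1 = cache.insert(shared_prompt + [1000 + i for i in range(32)], k1, v1)
    path2 = cache.insert(shared_prompt + [2000 + i for i in range(32)], k2, v2)
    # The shared 128-token prefix (two chunks) is stored once and reused by both paths.
    print(path1[1] is path2[1], path1[2] is path2[2])  # True True
```

In this sketch the lookup is purely token-based; the actual kernel design, eviction policy, and the two-phase partition that batches attention over shared versus private chunks are beyond what a cache data structure alone can show.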