Reducing Transformer Key-Value Cache Size with Cross-Layer Attention
May 21, 2024
Authors: William Brandon, Mayank Mishra, Aniruddha Nrusimha, Rameswar Panda, Jonathan Ragan-Kelley
cs.AI
Abstract
Key-value (KV) caching plays an essential role in accelerating decoding for
transformer-based autoregressive large language models (LLMs). However, the
amount of memory required to store the KV cache can become prohibitive at long
sequence lengths and large batch sizes. Since the invention of the transformer,
two of the most effective interventions discovered for reducing the size of the
KV cache have been Multi-Query Attention (MQA) and its generalization,
Grouped-Query Attention (GQA). MQA and GQA both modify the design of the
attention block so that multiple query heads can share a single key/value head,
reducing the number of distinct key/value heads by a large factor while only
minimally degrading accuracy. In this paper, we show that it is possible to
take Multi-Query Attention a step further by also sharing key and value heads
between adjacent layers, yielding a new attention design we call Cross-Layer
Attention (CLA). With CLA, we find that it is possible to reduce the size of
the KV cache by another 2x while maintaining nearly the same accuracy as
unmodified MQA. In experiments training 1B- and 3B-parameter models from
scratch, we demonstrate that CLA provides a Pareto improvement over the
memory/accuracy tradeoffs which are possible with traditional MQA, enabling
inference with longer sequence lengths and larger batch sizes than would
otherwise be possible.
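The sketch below illustrates the sharing pattern the abstract describes: an MQA-style single key/value head whose projections (and, during decoding, KV-cache entries) are computed once and reused by an adjacent layer, so the pair of layers stores half as much KV state. It is a minimal, hypothetical illustration based only on the abstract; the module and parameter names (SharedKV, ClaAttention, d_model, n_heads) are assumptions and do not come from the paper's implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn


class SharedKV(nn.Module):
    """MQA-style single key/value head whose projections (and, at inference
    time, KV-cache entries) are shared by a group of adjacent layers."""
    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        self.k_proj = nn.Linear(d_model, d_head, bias=False)
        self.v_proj = nn.Linear(d_model, d_head, bias=False)

    def forward(self, x: torch.Tensor):
        # Each output is (batch, 1 kv head, seq, d_head); the singleton head
        # dimension is later broadcast across all query heads.
        return self.k_proj(x).unsqueeze(1), self.v_proj(x).unsqueeze(1)


class ClaAttention(nn.Module):
    """Attention layer that owns only query/output projections; keys and
    values are supplied by a SharedKV module shared with a neighboring layer."""
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x: torch.Tensor, kv) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k, v = kv
        # Broadcast the single shared KV head across all query heads (a view,
        # no extra memory), then apply standard causal attention.
        k = k.expand(-1, self.n_heads, -1, -1)
        v = v.expand(-1, self.n_heads, -1, -1)
        y = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(y.transpose(1, 2).reshape(b, t, -1))


if __name__ == "__main__":
    d_model, n_heads = 256, 8
    shared = SharedKV(d_model, d_model // n_heads)
    layer0 = ClaAttention(d_model, n_heads)
    layer1 = ClaAttention(d_model, n_heads)

    x = torch.randn(2, 16, d_model)
    kv = shared(x)      # K/V computed (and cached) once for the pair of layers
    h = layer0(x, kv)   # layer 0 attends with its own query heads
    h = layer1(h, kv)   # adjacent layer 1 reuses the same K/V: half the KV cache
```

With a sharing factor of 2 as shown here, only every other layer contributes KV-cache entries, which corresponds to the 2x cache reduction relative to plain MQA that the abstract reports.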