Finch: Prompt-guided Key-Value Cache Compression
July 31, 2024
Authors: Giulio Corallo, Paolo Papotti
cs.AI
Abstract
Recent large language model applications, such as Retrieval-Augmented
Generation and chatbots, have led to an increased need to process longer input
contexts. However, this requirement is hampered by inherent limitations.
Architecturally, models are constrained by a context window defined during
training. Additionally, processing extensive texts requires substantial GPU
memory. We propose a novel approach, Finch, to compress the input context by
leveraging the pre-trained model weights of the self-attention. Given a prompt
and a long text, Finch iteratively identifies the most relevant Key (K) and
Value (V) pairs over chunks of the text conditioned on the prompt. Only such
pairs are stored in the KV cache, which, within the space constrained by the
context window, ultimately contains a compressed version of the long text. Our
proposal enables models to consume large inputs even with high compression (up
to 93x) while preserving semantic integrity without the need for fine-tuning.
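
The core idea, selecting for each chunk only the Key/Value pairs most relevant to the prompt, can be illustrated with a minimal single-head sketch. The function name, tensor shapes, and the simple "sum of attention weights" relevance score below are illustrative assumptions; the actual Finch procedure reuses the pre-trained model's self-attention weights layer by layer and differs in detail.

```python
import torch

def select_relevant_kv(prompt_q, chunk_k, chunk_v, budget):
    """Keep only the chunk KV pairs that the prompt attends to most.

    prompt_q: (P, d) query vectors for the prompt tokens (assumed precomputed)
    chunk_k:  (C, d) key vectors for the current text chunk
    chunk_v:  (C, d) value vectors for the current text chunk
    budget:   number of KV pairs to retain for this chunk
    """
    d = prompt_q.shape[-1]
    # Attention scores of every prompt token over every chunk token.
    scores = prompt_q @ chunk_k.T / d ** 0.5          # (P, C)
    weights = torch.softmax(scores, dim=-1)
    # Aggregate relevance per chunk token across all prompt tokens.
    relevance = weights.sum(dim=0)                    # (C,)
    keep = torch.topk(relevance, k=min(budget, chunk_k.shape[0])).indices
    keep, _ = torch.sort(keep)                        # preserve original token order
    return chunk_k[keep], chunk_v[keep]

# Toy usage: compress three chunks into a small KV cache.
torch.manual_seed(0)
d = 64
prompt_q = torch.randn(8, d)
cache_k, cache_v = [], []
for _ in range(3):
    chunk_k, chunk_v = torch.randn(128, d), torch.randn(128, d)
    k_sel, v_sel = select_relevant_kv(prompt_q, chunk_k, chunk_v, budget=16)
    cache_k.append(k_sel)
    cache_v.append(v_sel)
compressed_k = torch.cat(cache_k)   # 48 retained pairs instead of 384
compressed_v = torch.cat(cache_v)
print(compressed_k.shape, compressed_v.shape)
```

Iterating chunk by chunk in this way keeps the stored cache within the context-window budget regardless of the original input length, which is what allows high compression ratios without fine-tuning.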