LLoCO: Learning Long Contexts Offline

April 11, 2024
Authors: Sijun Tan, Xiuyu Li, Shishir Patil, Ziyang Wu, Tianjun Zhang, Kurt Keutzer, Joseph E. Gonzalez, Raluca Ada Popa
cs.AI

Abstract

Processing long contexts remains a challenge for large language models (LLMs) due to the quadratic computational and memory overhead of the self-attention mechanism and the substantial KV cache sizes during generation. We propose a novel approach to address this problem by learning contexts offline through context compression and in-domain parameter-efficient finetuning. Our method enables an LLM to create a concise representation of the original context and efficiently retrieve relevant information to answer questions accurately. We introduce LLoCO, a technique that combines context compression, retrieval, and parameter-efficient finetuning using LoRA. Our approach extends the effective context window of a 4k token LLaMA2-7B model to handle up to 128k tokens. We evaluate our approach on several long-context question-answering datasets, demonstrating that LLoCO significantly outperforms in-context learning while using 30× fewer tokens during inference. LLoCO achieves up to 7.62× speed-up and substantially reduces the cost of long document question answering, making it a promising solution for efficient long context processing. Our code is publicly available at https://github.com/jeffreysijuntan/lloco.
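To make the offline pipeline concrete, the sketch below illustrates the two-phase idea the abstract describes: compress a long context once offline into far fewer summary vectors, then at inference retrieve only the relevant summaries for the question. This is a minimal, self-contained illustration, not the paper's implementation: the chunk-averaging "compressor" and cosine-similarity retriever are hypothetical stand-ins for the paper's learned context encoder and retrieval step, and the LoRA-adapted answering model is omitted entirely.

```python
import math

def compress_context(embs, ratio=30):
    """Compress token embeddings into ~len(embs)/ratio summary vectors
    by averaging fixed-size chunks (an illustrative stand-in for a
    learned compressor; `ratio` mirrors the paper's ~30x token saving)."""
    out = []
    for i in range(0, len(embs), ratio):
        chunk = embs[i:i + ratio]
        dim = len(chunk[0])
        out.append(tuple(sum(v[d] for v in chunk) / len(chunk) for d in range(dim)))
    return out

def retrieve(query, summaries, top_k=2):
    """Return the indices of the `top_k` summary vectors most similar
    to the query embedding, by cosine similarity."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0
    ranked = sorted(range(len(summaries)),
                    key=lambda i: cos(query, summaries[i]), reverse=True)
    return ranked[:top_k]

# Offline phase: a toy 120-"token" document compressed 30x into 4 summaries.
doc = [(float(t), float(t % 7)) for t in range(120)]
summaries = compress_context(doc, ratio=30)

# Online phase: pick the summaries closest to the query embedding; only
# these (far fewer tokens than the full document) would be fed to the
# LoRA-finetuned model to generate the answer.
hits = retrieve((100.0, 3.0), summaries, top_k=2)
```

Only the compressed summaries are stored and attended over at inference, which is the source of the token and latency savings the abstract reports.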
