In-context Autoencoder for Context Compression in a Large Language Model
July 13, 2023
Authors: Tao Ge, Jing Hu, Xun Wang, Si-Qing Chen, Furu Wei
cs.AI
Abstract
We propose the In-context Autoencoder (ICAE) for context compression in a
large language model (LLM). The ICAE has two modules: a learnable encoder
adapted with LoRA from an LLM for compressing a long context into a limited
number of memory slots, and a fixed decoder which is the target LLM that can
condition on the memory slots for various purposes. We first pretrain the ICAE
using both autoencoding and language modeling objectives on massive text data,
enabling it to generate memory slots that accurately and comprehensively
represent the original context. Then, we fine-tune the pretrained ICAE on a
small amount of instruction data to enhance its interaction with various prompts
for producing desirable responses. Our experimental results demonstrate that
the ICAE learned with our proposed pretraining and fine-tuning paradigm can
effectively produce memory slots with 4× context compression, which can
be well conditioned on by the target LLM to respond to various prompts. The
promising results highlight the ICAE as a novel approach to the long context
problem and its potential to reduce the computation and memory overheads of
LLM inference in practice, motivating further research into context
management for an LLM. Our code and data
will be released shortly.
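
Below is a minimal, self-contained sketch of the encode-then-condition idea described in the abstract, written with toy PyTorch modules. The layer sizes, the use of learnable "memory token" queries appended after the context, and all names here are illustrative assumptions, not the authors' actual implementation (which adapts a real LLM with LoRA as the encoder and keeps the target LLM frozen as the decoder).

# Toy sketch of the ICAE idea: a trainable encoder compresses a long context
# into a fixed number of memory slots; a frozen decoder conditions on them.
# All module choices and sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class ToyICAE(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, n_memory_slots=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learnable memory-slot queries appended after the context tokens.
        self.memory_tokens = nn.Parameter(torch.randn(n_memory_slots, d_model) * 0.02)
        # Stand-in for the LoRA-adapted encoder (trainable).
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Stand-in for the fixed target LLM used as the decoder.
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)
        for p in list(self.decoder.parameters()) + list(self.lm_head.parameters()):
            p.requires_grad = False  # the target LLM stays frozen

    def compress(self, context_ids):
        """Encode a long context into a fixed number of memory slots."""
        b = context_ids.size(0)
        ctx = self.embed(context_ids)                            # (B, L, D)
        mem = self.memory_tokens.unsqueeze(0).expand(b, -1, -1)  # (B, M, D)
        hidden = self.encoder(torch.cat([ctx, mem], dim=1))      # (B, L+M, D)
        return hidden[:, -self.memory_tokens.size(0):, :]        # (B, M, D)

    def decode(self, memory_slots, prompt_ids):
        """Condition the frozen decoder on memory slots plus a prompt."""
        prompt = self.embed(prompt_ids)                           # (B, P, D)
        hidden = self.decoder(torch.cat([memory_slots, prompt], dim=1))
        return self.lm_head(hidden[:, memory_slots.size(1):, :])  # logits at prompt positions

model = ToyICAE(n_memory_slots=32)
context = torch.randint(0, 32000, (1, 128))  # 128 context tokens -> 32 slots = 4x compression
prompt = torch.randint(0, 32000, (1, 16))    # e.g. an instruction conditioned on the slots
slots = model.compress(context)
logits = model.decode(slots, prompt)
print(slots.shape, logits.shape)  # torch.Size([1, 32, 256]) torch.Size([1, 16, 32000])

In this sketch, the autoencoding and language modeling pretraining objectives mentioned in the abstract would correspond to training the encoder (and memory queries) so that the decoder can reconstruct or continue the original context from the 32 slots alone, while the decoder's weights remain untouched.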