LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models
October 12, 2023
Authors: Yixiao Li, Yifan Yu, Chen Liang, Pengcheng He, Nikos Karampatziakis, Weizhu Chen, Tuo Zhao
cs.AI
Abstract
Quantization is an indispensable technique for serving Large Language Models (LLMs) and has recently found its way into LoRA fine-tuning. In this work, we focus on the scenario where quantization and LoRA fine-tuning are applied together to a pre-trained model. In such cases, it is common to observe a consistent gap in downstream task performance between full fine-tuning and the quantization-plus-LoRA fine-tuning approach. In response, we propose LoftQ (LoRA-Fine-Tuning-aware Quantization), a novel quantization framework that simultaneously quantizes an LLM and finds a proper low-rank initialization for LoRA fine-tuning. Such an initialization alleviates the discrepancy between the quantized and full-precision models and significantly improves generalization on downstream tasks. We evaluate our method on natural language understanding, question answering, summarization, and natural language generation tasks. Experiments show that our method is highly effective and outperforms existing quantization methods, especially in the challenging 2-bit and 2/4-bit mixed-precision regimes. We will release our code.
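To make the idea concrete, below is a minimal sketch of how a quantized backbone and a low-rank LoRA initialization can be found jointly: alternately quantize the weight matrix and fit a rank-r correction to the quantization residual, so that the quantized weights plus the low-rank factors approximate the original full-precision weights. The uniform stand-in quantizer, the rank, and the iteration count are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a LoftQ-style joint initialization (illustrative, not the official code).
import numpy as np

def uniform_quantize(w: np.ndarray, bits: int = 2) -> np.ndarray:
    """Symmetric uniform quantizer, used here only as a stand-in for a real LLM quantizer."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / max(levels, 1)
    return np.round(w / scale).clip(-levels, levels) * scale

def loftq_style_init(w: np.ndarray, rank: int = 16, bits: int = 2, steps: int = 5):
    """Return (Q, A, B) such that Q + A @ B approximates the full-precision weight W."""
    a = np.zeros((w.shape[0], rank))
    b = np.zeros((rank, w.shape[1]))
    for _ in range(steps):
        # Quantize the part of W not yet explained by the low-rank term.
        q = uniform_quantize(w - a @ b, bits=bits)
        # Fit a rank-r correction to the quantization residual via truncated SVD.
        u, s, vt = np.linalg.svd(w - q, full_matrices=False)
        a = u[:, :rank] * s[:rank]
        b = vt[:rank, :]
    return q, a, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((256, 256))
    q, a, b = loftq_style_init(w, rank=16, bits=2)
    print("quantization-only error :", np.linalg.norm(w - uniform_quantize(w, 2)))
    print("quantized + low-rank err:", np.linalg.norm(w - (q + a @ b)))
```

Under these assumptions, A and B would then serve as the LoRA adapter initialization on top of the frozen quantized weights Q, instead of the usual zero/random LoRA initialization over a quantized model.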