

OneBit: Towards Extremely Low-bit Large Language Models

February 17, 2024
Authors: Yuzhuang Xu, Xu Han, Zonghan Yang, Shuo Wang, Qingfu Zhu, Zhiyuan Liu, Weidong Liu, Wanxiang Che
cs.AI

Abstract

Model quantization uses low bit-width values to represent the weight matrices of models and is a promising approach to reducing both the storage and computational overheads of deploying highly anticipated LLMs. However, existing quantization methods suffer severe performance degradation when the bit-width is extremely reduced, and thus focus on 4-bit or 8-bit values to quantize models. This paper boldly quantizes the weight matrices of LLMs to 1 bit, paving the way for extremely low bit-width deployment of LLMs. To this end, we introduce a 1-bit quantization-aware training (QAT) framework named OneBit, which includes a novel 1-bit parameter representation method to better quantize LLMs as well as an effective parameter initialization method based on matrix decomposition to improve the convergence speed of the QAT framework. Extensive experimental results indicate that OneBit achieves good performance (at least 83% of the non-quantized performance) with a robust training process when using only 1-bit weight matrices.
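
The abstract mentions two components: a 1-bit parameter representation and a matrix-decomposition-based initialization. The PyTorch sketch below illustrates one plausible reading of that description, assuming the layer keeps a ±1 sign matrix plus two full-precision value vectors and is initialized from a rank-1 approximation of the magnitudes of the original weights. The class name `OneBitLinear`, the vector names `g` and `h`, the straight-through estimator, and the SVD-based initialization are illustrative assumptions drawn only from the abstract, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class OneBitLinear(nn.Module):
    """Sketch of a 1-bit linear layer: a +/-1 sign weight matrix plus two
    full-precision value vectors that rescale the inputs and outputs
    (an assumption based on the abstract, not the paper's exact design)."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # +/-1 weight matrix, stored as float here for simplicity; a real
        # implementation would pack it into single bits.
        self.weight_sign = nn.Parameter(torch.ones(out_features, in_features))
        self.g = nn.Parameter(torch.ones(in_features))   # input-side value vector
        self.h = nn.Parameter(torch.ones(out_features))  # output-side value vector

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Quantize to +/-1 with a straight-through estimator so the QAT
        # framework can still back-propagate through the sign operation.
        w = self.weight_sign
        w_q = torch.sign(w).detach() + w - w.detach()
        return ((x * self.g) @ w_q.t()) * self.h

    @torch.no_grad()
    def init_from_full_precision(self, w_fp: torch.Tensor) -> None:
        # Matrix-decomposition-based initialization (assumption: sign(W)
        # supplies the 1-bit matrix, and a rank-1 SVD of |W| supplies the
        # two value vectors).
        u, s, vh = torch.linalg.svd(w_fp.abs(), full_matrices=False)
        self.weight_sign.copy_(torch.sign(w_fp))
        self.h.copy_(u[:, 0] * s[0].sqrt())
        self.g.copy_(vh[0, :] * s[0].sqrt())


# Hypothetical usage: initialize from a pretrained weight matrix, then
# fine-tune the layer within a quantization-aware training loop.
# layer = OneBitLinear(4096, 4096)
# layer.init_from_full_precision(pretrained_weight)  # (4096, 4096) FP tensor
# y = layer(torch.randn(2, 4096))
```

The sketch keeps only the sign matrix and two vectors as trainable parameters, which is consistent with the abstract's claim that the weight matrices themselves are stored at 1 bit while a small amount of full-precision state preserves scale information.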
