OneBit: Towards Extremely Low-bit Large Language Models
February 17, 2024
Authors: Yuzhuang Xu, Xu Han, Zonghan Yang, Shuo Wang, Qingfu Zhu, Zhiyuan Liu, Weidong Liu, Wanxiang Che
cs.AI
Abstract
Model quantization uses low bit-width values to represent the weight matrices of models, which is a promising approach to reducing both the storage and computational overheads of deploying highly anticipated LLMs. However, existing quantization methods suffer severe performance degradation when the bit-width is extremely reduced, and thus focus on utilizing 4-bit or 8-bit values to quantize models. This paper boldly quantizes the weight matrices of LLMs to 1 bit, paving the way for extremely low bit-width deployment of LLMs. To this end, we introduce a 1-bit quantization-aware training (QAT) framework named OneBit, which includes a novel 1-bit parameter representation method to better quantize LLMs, as well as an effective parameter initialization method based on matrix decomposition to improve the convergence speed of the QAT framework. Extensive experimental results indicate that OneBit achieves good performance (at least 83% of the non-quantized performance) with a robust training process when using only 1-bit weight matrices.
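The abstract does not spell out the 1-bit parameter representation or the decomposition-based initialization, so the following is only a minimal PyTorch sketch of one plausible reading: a {-1, +1} sign matrix paired with two full-precision value vectors, initialized from a rank-1 decomposition of the original weights. The class name OneBitLinear, the vector names g and h, and the SVD-based initializer are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn


class OneBitLinear(nn.Module):
    """Sketch of a 1-bit linear layer: a +/-1 sign matrix plus two
    full-precision value vectors rescaling inputs and outputs.
    (Assumed formulation; not the authors' implementation.)"""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Sign matrix stored as +/-1 floats here; a real deployment would bit-pack it.
        self.register_buffer("w_sign", torch.ones(out_features, in_features))
        # Per-input and per-output value vectors kept in full precision.
        self.g = nn.Parameter(torch.ones(in_features))
        self.h = nn.Parameter(torch.ones(out_features))

    @torch.no_grad()
    def init_from_full_precision(self, weight: torch.Tensor) -> None:
        """Assumed matrix-decomposition initialization: keep sign(W) and seed
        the value vectors from a rank-1 SVD approximation of |W|."""
        self.w_sign.copy_(torch.sign(weight))
        u, s, vh = torch.linalg.svd(weight.abs(), full_matrices=False)
        scale = torch.sqrt(s[0])
        self.h.copy_(u[:, 0] * scale)   # length = out_features
        self.g.copy_(vh[0, :] * scale)  # length = in_features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = ((x * g) @ sign(W)^T) * h  -- assumed form of the 1-bit matmul.
        return (x * self.g) @ self.w_sign.t() * self.h


if __name__ == "__main__":
    # Usage: approximate a pretrained full-precision layer with the 1-bit layer,
    # then (in a full QAT pipeline) continue training to recover quality.
    torch.manual_seed(0)
    full = nn.Linear(64, 32, bias=False)
    onebit = OneBitLinear(64, 32)
    onebit.init_from_full_precision(full.weight)
    x = torch.randn(4, 64)
    print("approximation error:", (full(x) - onebit(x)).norm().item())
```

In this sketch only the sign matrix needs 1 bit per weight, while the two value vectors add a negligible amount of full-precision storage; the decomposition-based initializer gives the QAT stage a starting point close to the original weights, which is the role the abstract attributes to the matrix-decomposition initialization.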