OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models
August 25, 2023
Authors: Wenqi Shao, Mengzhao Chen, Zhaoyang Zhang, Peng Xu, Lirui Zhao, Zhiqian Li, Kaipeng Zhang, Peng Gao, Yu Qiao, Ping Luo
cs.AI
Abstract
Large language models (LLMs) have revolutionized natural language processing
tasks. However, their practical deployment is hindered by their immense memory
and computation requirements. Although recent post-training quantization (PTQ)
methods are effective in reducing memory footprint and improving the
computational efficiency of LLMs, they rely on hand-crafted quantization
parameters, which leads to low performance and an inability to handle extremely low-bit quantization.
To tackle this issue, we introduce an Omnidirectionally calibrated Quantization
(OmniQuant) technique for LLMs, which achieves good performance in diverse
quantization settings while maintaining the computational efficiency of PTQ by
efficiently optimizing various quantization parameters. OmniQuant comprises two
innovative components: Learnable Weight Clipping (LWC) and Learnable
Equivalent Transformation (LET). LWC modulates the extreme values of weights by
optimizing the clipping threshold. Meanwhile, LET tackles activation outliers
by shifting the challenge of quantization from activations to weights through a
learnable equivalent transformation. Operating within a differentiable
framework using block-wise error minimization, OmniQuant can optimize the
quantization process efficiently for both weight-only and weight-activation
quantization. For instance, the LLaMA-2 model family, spanning 7B to 70B parameters, can
be processed with OmniQuant on a single A100-40G GPU within 1-16 hours using
128 samples. Extensive experiments validate OmniQuant's superior performance
across diverse quantization configurations such as W4A4, W6A6, W4A16, W3A16,
and W2A16. Additionally, OmniQuant demonstrates effectiveness in
instruction-tuned models and delivers notable improvements in inference speed
and memory reduction on real devices. Code and models are available at
https://github.com/OpenGVLab/OmniQuant.
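To make the two components concrete, the sketches below illustrate how they could be realized. They are minimal approximations, not the authors' implementation; all class, function, and parameter names (e.g. LearnableWeightClipping, gamma, beta) are assumptions. The first sketch shows the idea behind Learnable Weight Clipping: per-channel clipping factors are learned through a sigmoid, and the weight is fake-quantized inside the clipped range.

```python
import torch
import torch.nn as nn


class LearnableWeightClipping(nn.Module):
    """Hypothetical sketch of LWC: learnable per-channel clipping for weight quantization."""

    def __init__(self, out_features: int, n_bits: int = 4):
        super().__init__()
        self.n_bits = n_bits
        # Learnable clipping factors for the upper/lower bound of each output channel;
        # sigmoid(4.0) ~ 0.98, i.e. almost no clipping at initialization.
        self.gamma = nn.Parameter(torch.full((out_features, 1), 4.0))
        self.beta = nn.Parameter(torch.full((out_features, 1), 4.0))

    def forward(self, weight: torch.Tensor) -> torch.Tensor:
        # Shrink the per-channel min/max range by the learned clipping factors.
        w_max = torch.sigmoid(self.gamma) * weight.amax(dim=1, keepdim=True)
        w_min = torch.sigmoid(self.beta) * weight.amin(dim=1, keepdim=True)
        # Asymmetric uniform quantization grid inside the clipped range.
        scale = (w_max - w_min).clamp(min=1e-5) / (2 ** self.n_bits - 1)
        zero_point = torch.round(-w_min / scale)
        # Straight-through estimator: round in the forward pass, identity in the
        # backward pass, so gradients still reach gamma and beta through scale.
        q = weight / scale + zero_point
        q = q + (q.round().clamp(0, 2 ** self.n_bits - 1) - q).detach()
        return (q - zero_point) * scale  # "fake-quantized" weight, same shape as input
```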
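The second sketch illustrates the idea behind a Learnable Equivalent Transformation for a linear layer y = x @ W.T + b: a learnable per-channel shift and scale flatten activation outliers while being folded exactly into the weights and bias, so the full-precision output is unchanged. Again, the class and attribute names are hypothetical.

```python
import torch
import torch.nn as nn


class EquivalentTransform(nn.Module):
    """Hypothetical sketch of LET for a linear layer y = x @ W.T + b."""

    def __init__(self, in_features: int):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(in_features))   # channel-wise scaling s
        self.shift = nn.Parameter(torch.zeros(in_features))  # channel-wise shift delta

    def forward(self, x, weight, bias=None):
        # Outlier channels in the activations are shifted and shrunk ...
        x_t = (x - self.shift) / self.scale
        # ... and the shift/scale are folded into the weights and bias, so
        # x_t @ w_t.T + b_t == x @ weight.T + bias in full precision.
        w_t = weight * self.scale        # (out, in) * (in,) broadcasts per input channel
        b_t = self.shift @ weight.T if bias is None else bias + self.shift @ weight.T
        return x_t, w_t, b_t


# Quick equivalence check with random data (illustrative only).
x = torch.randn(2, 8)
layer = nn.Linear(8, 4)
let = EquivalentTransform(8)
with torch.no_grad():
    let.scale.copy_(torch.rand(8) + 0.5)  # arbitrary non-trivial scales
    let.shift.copy_(torch.randn(8))
x_t, w_t, b_t = let(x, layer.weight, layer.bias)
assert torch.allclose(x_t @ w_t.T + b_t, layer(x), atol=1e-5)
```

Because the transformation is exact in full precision, the hard-to-quantize activation outliers are suppressed before quantization, while the extra range absorbed by the weights can be handled by weight clipping.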
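Finally, a rough sketch of the block-wise error minimization mentioned in the abstract: only the quantization parameters of one transformer block are trained at a time against the block's full-precision output on a small calibration set. The names block_fp and block_quant and the substring filter on parameter names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def calibrate_block(block_fp, block_quant, calib_inputs, steps=20, lr=1e-2):
    """Hypothetical block-wise calibration: fit one quantized transformer block
    to its full-precision counterpart on a small calibration set."""
    # Only the quantization parameters (LWC clipping factors, LET scales/shifts)
    # are trained; the name filter below is an assumption of this sketch.
    params = [p for name, p in block_quant.named_parameters()
              if any(k in name for k in ("gamma", "beta", "scale", "shift"))]
    opt = torch.optim.AdamW(params, lr=lr)
    for _ in range(steps):
        for x in calib_inputs:                    # e.g. 128 calibration samples
            with torch.no_grad():
                target = block_fp(x)              # full-precision block output
            loss = F.mse_loss(block_quant(x), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return block_quant
```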