

FlatQuant: Flatness Matters for LLM Quantization

October 12, 2024
Authors: Yuxuan Sun, Ruikang Liu, Haoli Bai, Han Bao, Kang Zhao, Yuening Li, Jiaxin Hu, Xianzhi Yu, Lu Hou, Chun Yuan, Xin Jiang, Wulong Liu, Jun Yao
cs.AI

Abstract

Recently, quantization has been widely used for the compression and acceleration of large language models~(LLMs). Due to the outliers in LLMs, it is crucial to flatten weights and activations to minimize quantization error with the equally spaced quantization points. Prior research explores various pre-quantization transformations to suppress outliers, such as per-channel scaling and Hadamard transformation. However, we observe that these transformed weights and activations can still remain steep and outspread. In this paper, we propose FlatQuant (Fast and Learnable Affine Transformation), a new post-training quantization approach to enhance flatness of weights and activations. Our approach identifies optimal affine transformations tailored to each linear layer, calibrated in hours via a lightweight objective. To reduce runtime overhead, we apply Kronecker decomposition to the transformation matrices, and fuse all operations in FlatQuant into a single kernel. Extensive experiments show that FlatQuant sets up a new state-of-the-art quantization benchmark. For instance, it achieves less than 1% accuracy drop for W4A4 quantization on the LLaMA-3-70B model, surpassing SpinQuant by 7.5%. For inference latency, FlatQuant reduces the slowdown induced by pre-quantization transformation from 0.26x of QuaRot to merely 0.07x, bringing up to 2.3x speedup for prefill and 1.7x speedup for decoding, respectively. Code is available at: https://github.com/ruikangliu/FlatQuant.
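The mechanism described above, a learnable affine transform whose matrix is factored as a Kronecker product and applied before quantization to equally spaced low-bit levels, can be illustrated with a short sketch. The snippet below is not the authors' fused kernel; the function names, the random orthogonal stand-ins for the learned transform factors, and the toy per-tensor INT4 quantizer are illustrative assumptions only.

```python
# Minimal PyTorch sketch of the idea in the abstract (not FlatQuant's fused kernel):
# transform activations with a Kronecker-factored matrix P = P1 ⊗ P2, then
# quantize to equally spaced INT4 levels. P1, P2, and quantize_int4 are stand-ins.
import torch


def kronecker_transform(x: torch.Tensor, P1: torch.Tensor, P2: torch.Tensor) -> torch.Tensor:
    """Apply y = (P1 ⊗ P2) x per token without materializing the full n x n matrix."""
    n1, n2 = P1.shape[0], P2.shape[0]
    *batch, n = x.shape
    assert n == n1 * n2, "hidden size must factor as n1 * n2"
    x = x.reshape(*batch, n1, n2)   # view each token as an n1 x n2 matrix
    y = P1 @ x @ P2.T               # equals multiplying the flat token by kron(P1, P2)
    return y.reshape(*batch, n)


def quantize_int4(x: torch.Tensor):
    """Toy symmetric per-tensor INT4 quantizer with equally spaced points."""
    scale = x.abs().amax() / 7      # symmetric INT4 range is [-8, 7]
    q = torch.clamp(torch.round(x / scale), -8, 7)
    return q, scale


# Usage: flatten activations with a (here random orthogonal) transform, then quantize.
hidden = 4096                                # e.g. 4096 = 64 * 64
P1 = torch.linalg.qr(torch.randn(64, 64)).Q  # stand-in for a learned factor
P2 = torch.linalg.qr(torch.randn(64, 64)).Q
x = torch.randn(2, hidden)
q, scale = quantize_int4(kronecker_transform(x, P1, P2))
```

Because each factor here is only 64 x 64 for a 4096-wide layer, the per-token transform is far cheaper than a dense 4096 x 4096 multiplication, which is consistent with the low pre-quantization overhead reported in the abstract.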

