LLM-FP4: 4-Bit Floating-Point Quantized Transformers

October 25, 2023
作者: Shih-yang Liu, Zechun Liu, Xijie Huang, Pingcheng Dong, Kwang-Ting Cheng
cs.AI

Abstract

We propose LLM-FP4 for quantizing both weights and activations in large language models (LLMs) down to 4-bit floating-point values, in a post-training manner. Existing post-training quantization (PTQ) solutions are primarily integer-based and struggle with bit widths below 8 bits. Compared to integer quantization, floating-point (FP) quantization is more flexible and can better handle long-tail or bell-shaped distributions, and it has emerged as a default choice in many hardware platforms. One characteristic of FP quantization is that its performance largely depends on the choice of exponent bits and clipping range. In this regard, we construct a strong FP-PTQ baseline by searching for the optimal quantization parameters. Furthermore, we observe a pattern of high inter-channel variance and low intra-channel variance in activation distributions, which increases the difficulty of activation quantization. We recognize this pattern to be consistent across a spectrum of transformer models designed for diverse tasks, such as LLMs, BERT, and Vision Transformer models. To tackle this, we propose per-channel activation quantization and show that these additional scaling factors can be reparameterized as exponential biases of the weights, incurring a negligible cost. Our method, for the first time, can quantize both weights and activations in LLaMA-13B to only 4 bits and achieves an average score of 63.1 on the common-sense zero-shot reasoning tasks, only 5.8 points lower than the full-precision model, significantly outperforming the previous state of the art by 12.7 points. Code is available at: https://github.com/nbasyl/LLM-FP4.
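The abstract notes that FP quantization performance hinges on the exponent-bit split and the clipping range, and that the paper builds its FP-PTQ baseline by searching for these parameters. Below is a minimal Python sketch of that search idea under simplifying assumptions; it is not the authors' implementation (see the linked repository for that), and the names fp_quantize and search_fp4_config, the MSE criterion, and the grid of clipping ratios are illustrative choices.

```python
# Minimal sketch (not the paper's code): simulated FP4 quantization with a
# grid search over the exponent/mantissa split and the clipping range.
import numpy as np

def fp_quantize(x, exp_bits, man_bits, clip_max):
    """Simulate (1 sign + exp_bits + man_bits)-bit FP quantization of x,
    clipping magnitudes to clip_max."""
    sign = np.sign(x)
    mag = np.clip(np.abs(x), 0.0, clip_max)
    max_exp = 2 ** exp_bits - 1
    # Choose the scale so the largest representable value maps to clip_max.
    scale = clip_max / (2.0 ** max_exp * (2.0 - 2.0 ** -man_bits))
    mag = mag / scale
    # Per-element exponent, clamped to the representable range.
    e = np.clip(np.floor(np.log2(np.maximum(mag, 1e-12))), 0, max_exp)
    # Round the mantissa to man_bits fractional bits.
    q = np.round(mag / 2.0 ** e * 2 ** man_bits) / 2 ** man_bits * 2.0 ** e
    return sign * np.minimum(q * scale, clip_max)

def search_fp4_config(x):
    """Grid-search the exponent/mantissa split and clipping ratio that
    minimize reconstruction error, echoing the FP-PTQ baseline idea."""
    best = None
    for exp_bits in (1, 2, 3):          # 4 bits = 1 sign + exponent + mantissa
        man_bits = 3 - exp_bits
        for ratio in np.linspace(0.5, 1.0, 21):
            clip_max = ratio * float(np.abs(x).max())
            err = float(np.mean((fp_quantize(x, exp_bits, man_bits, clip_max) - x) ** 2))
            if best is None or err < best[0]:
                best = (err, exp_bits, man_bits, clip_max)
    return best  # (mse, exp_bits, man_bits, clip_max)

# Example: quantize a bell-shaped weight tensor to simulated FP4.
w = np.random.randn(128, 128).astype(np.float32)
mse, e_bits, m_bits, clip_max = search_fp4_config(w)
w_q = fp_quantize(w, e_bits, m_bits, clip_max)
```

This sketch covers only the per-tensor search; the paper's further step of per-channel activation scaling, with the scales folded into the weights' exponential bias, is not shown here.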