SQ-format: A Unified Sparse-Quantized Hardware-friendly Data Format for LLMs
December 5, 2025
Authors: Ruixuan Huang, Hao Zeng, Hantao Huang, Jinyuan Shi, Minghui Yu, Ian En-Hsu Yen, Shuai Wang
cs.AI
Abstract
Post-training quantization (PTQ) plays a crucial role in the democratization of large language models (LLMs). However, existing low-bit quantization and sparsification techniques struggle to balance accuracy and efficiency due to limited hardware support. For example, W4A8 can only achieve the same peak TOPS as W8A8, while the GPU-supported sparse data format (2:4 semi-structured sparsity) is seldom adopted because of its accuracy loss. To bridge this gap, in this paper we propose the Sparse-Quantized Format (SQ-format), a unified data format for quantization and sparsification that is potentially easy to support on both new hardware and existing GPUs. SQ-format exploits the fact that a sparse matrix can be accelerated at high precision, and that low-precision matrix multiplication can be accelerated accordingly. As such, SQ-format is proposed to achieve a Pareto improvement between performance and throughput. The format is particularly suitable for activations with unevenly distributed outliers and makes their static compression possible. We show state-of-the-art PTQ performance with SQ-format, propose the hardware support it requires, and further offer design exploration and insights for next-generation AI accelerators.
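The abstract does not specify SQ-format's exact encoding, but the principle it states (a high-precision sparse outlier part plus an accelerable low-precision dense part) can be sketched concretely. Below is a minimal NumPy illustration under our own assumptions: the function names sq_decompose/sq_reconstruct, the 1% outlier ratio, and the FP32-sparse-plus-INT8-dense split are hypothetical choices for exposition, not the paper's actual format.

```python
import numpy as np

def sq_decompose(x: np.ndarray, outlier_ratio: float = 0.01):
    """Split x into a high-precision sparse outlier part and a
    symmetric INT8-quantized dense part: x ~= sparse_hp + scale * dense_int8.
    (Hypothetical sketch of a sparse-plus-quantized decomposition.)"""
    k = max(1, int(x.size * outlier_ratio))
    # Treat the k largest-magnitude entries as outliers and keep them in FP32.
    threshold = np.partition(np.abs(x).ravel(), -k)[-k]
    outlier_mask = np.abs(x) >= threshold
    sparse_hp = np.where(outlier_mask, x, 0.0).astype(np.float32)
    residual = np.where(outlier_mask, 0.0, x)
    # With the outliers removed, the residual range is small, so INT8 covers it well.
    scale = max(float(np.abs(residual).max()), 1e-8) / 127.0
    dense_int8 = np.clip(np.round(residual / scale), -127, 127).astype(np.int8)
    return sparse_hp, dense_int8, scale

def sq_reconstruct(sparse_hp, dense_int8, scale):
    # Recombine the two terms into an approximation of the original tensor.
    return sparse_hp + dense_int8.astype(np.float32) * scale

# Activations with a few large, unevenly distributed outliers
# reconstruct with low error under this split.
rng = np.random.default_rng(0)
x = rng.normal(size=(128, 128)).astype(np.float32)
x[rng.integers(0, 128, 16), rng.integers(0, 128, 16)] *= 50.0  # inject outliers
sparse_hp, dense_int8, scale = sq_decompose(x)
err = float(np.abs(x - sq_reconstruct(sparse_hp, dense_int8, scale)).max())
print(f"max reconstruction error: {err:.4f}")
```

In such a scheme, the sparse FP term and the dense INT8 term could each be multiplied on the hardware path suited to it (sparse high-precision units and low-precision tensor cores, respectively) and the partial products summed, which appears to be the acceleration rationale the abstract alludes to.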