

SQ-format: A Unified Sparse-Quantized Hardware-friendly Data Format for LLMs

December 5, 2025
作者: Ruixuan Huang, Hao Zeng, Hantao Huang, Jinyuan Shi, Minghui Yu, Ian En-Hsu Yen, Shuai Wang
cs.AI

Abstract

Post-training quantization (PTQ) plays a crucial role in the democratization of large language models (LLMs). However, existing low-bit quantization and sparsification techniques struggle to balance accuracy and efficiency due to limited hardware support. For example, W4A8 can only achieve the same peak TOPS as W8A8, while the GPU-supported sparse data format (2:4 semi-structured sparsity) is seldom adopted because of its accuracy loss. To bridge this gap, in this paper we propose the Sparse-Quantized Format (SQ-format), a unified data format for quantization and sparsification that can potentially be supported with little effort by new hardware and existing GPUs. SQ-format exploits the fact that sparse matrices can be accelerated at high precision, and that low-precision matrix multiplication can likewise be accelerated; it is therefore designed to achieve a Pareto improvement between performance and throughput. The format is particularly suitable for activations with unevenly distributed outliers and makes their static compression possible. We demonstrate state-of-the-art PTQ performance with SQ-format, propose the hardware support it requires, and further offer design exploration and insights for next-generation AI accelerators.