
SINQ: Sinkhorn-Normalized Quantization for Calibration-Free Low-Precision LLM Weights

September 26, 2025
Authors: Lorenz K. Müller, Philippe Bich, Jiawei Zhuang, Ahmet Çelik, Luca Benfenati, Lukas Cavigelli
cs.AI

Abstract

Post-training quantization has emerged as the most widely used strategy for deploying large language models at low precision. Still, current methods show perplexity degradation at bit-widths less than or equal to 4, partly because representing outliers causes precision issues in parameters that share the same scales as these outliers. This problem is especially pronounced for calibration-free, uniform quantization methods. We introduce SINQ to augment existing post-training quantizers with an additional second-axis scale factor and a fast Sinkhorn-Knopp-style algorithm that finds scales to normalize per-row and per-column variances, thereby minimizing a novel per-matrix proxy target for quantization: the matrix imbalance. Our method has no interactions between layers and can be trivially applied to new architectures to quantize any linear layer. We evaluate our method on the Qwen3 model family and DeepSeek-V2.5. SINQ improves WikiText2 and C4 perplexity significantly against uncalibrated uniform quantization baselines and can be further enhanced by combining it with calibration and non-uniform quantization levels. Code to reproduce the results of this work and to easily quantize models using SINQ is available at https://github.com/huawei-csl/SINQ.
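To make the idea of a second-axis scale factor concrete, the sketch below is a minimal, illustrative NumPy implementation of a Sinkhorn-Knopp-style iteration that alternately absorbs the per-row and per-column standard deviations of a weight matrix into two scale vectors, then quantizes the balanced matrix uniformly. It is not the official SINQ code (the actual algorithm and its matrix-imbalance objective live in the linked repository); the function names, iteration count, and the simple symmetric quantizer are assumptions made for illustration only.

```python
# Illustrative sketch only -- NOT the official SINQ implementation.
# Alternately rescales rows and columns so their standard deviations
# equalize (a Sinkhorn-Knopp-style iteration), then applies uniform
# quantization using the resulting two-axis scales.
import numpy as np

def sinkhorn_style_scales(W, n_iter=20, eps=1e-8):
    """Find per-row and per-column scales that balance the variance of W (hypothetical helper)."""
    row_scale = np.ones((W.shape[0], 1))
    col_scale = np.ones((1, W.shape[1]))
    for _ in range(n_iter):
        M = W / (row_scale * col_scale)            # currently normalized matrix
        r = M.std(axis=1, keepdims=True) + eps     # per-row standard deviation
        row_scale *= r                             # absorb it into the row scales
        M = W / (row_scale * col_scale)
        c = M.std(axis=0, keepdims=True) + eps     # per-column standard deviation
        col_scale *= c                             # absorb it into the column scales
    return row_scale, col_scale

def dual_scale_uniform_quant(W, bits=4, n_iter=20):
    """Uniformly quantize W after two-axis variance normalization (hypothetical helper)."""
    row_scale, col_scale = sinkhorn_style_scales(W, n_iter)
    M = W / (row_scale * col_scale)                # balanced matrix: outliers no longer dominate one scale
    qmax = 2 ** (bits - 1) - 1
    step = np.abs(M).max() / qmax                  # one uniform step size for the balanced matrix
    Q = np.clip(np.round(M / step), -qmax - 1, qmax).astype(np.int8)
    W_hat = Q * step * row_scale * col_scale       # dequantized approximation
    return Q, row_scale, col_scale, step, W_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(256, 256))
    W[:, 7] *= 50.0                                # inject an outlier column
    _, _, _, _, W_hat = dual_scale_uniform_quant(W, bits=4)
    print("relative reconstruction error:", np.linalg.norm(W - W_hat) / np.linalg.norm(W))
```

In this toy setup the outlier column is absorbed into its column scale, so the remaining parameters are quantized with a step size that is not inflated by the outlier; this is the intuition behind the second-axis scales, and because each weight matrix is treated independently, the procedure needs no calibration data and no interaction between layers.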