The Geometry of LLM Quantization: GPTQ as Babai's Nearest Plane Algorithm

July 24, 2025
Authors: Jiale Chen, Torsten Hoefler, Dan Alistarh
cs.AI

Abstract

Quantizing the weights of large language models (LLMs) from 16-bit to lower bitwidths is the de facto approach for deploying massive transformers onto more affordable accelerators. GPTQ emerged as one of the standard methods for one-shot post-training quantization at LLM scale. Yet its inner workings are described as a sequence of ad-hoc algebraic updates that obscure any geometric meaning or worst-case guarantees. In this work, we show that, when executed back-to-front (from the last dimension to the first) for a linear layer, GPTQ is mathematically identical to Babai's nearest plane algorithm for the classical closest vector problem (CVP) on a lattice defined by the Hessian matrix of the layer's inputs. This equivalence is based on a sophisticated mathematical argument and has two analytical consequences: (i) GPTQ's error-propagation step gains an intuitive geometric interpretation; (ii) GPTQ inherits the error upper bound of Babai's algorithm under the no-clipping condition. Taken together, these results place GPTQ on firm theoretical footing and open the door to importing decades of progress in lattice algorithms into the design of future quantization algorithms for billion-parameter models.
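To make the back-to-front structure concrete, here is a minimal NumPy sketch of the textbook Babai nearest plane procedure. This is illustrative only, not code from the paper: the function name and the toy basis are our own choices. Each iteration rounds one integer coefficient and, via back-substitution, folds the resulting residual into the choices made for the earlier dimensions, which is the round-then-propagate pattern the abstract identifies in GPTQ's error-propagation step.

```python
import numpy as np

def babai_nearest_plane(B, t):
    """Approximate the closest vector to target t in the lattice spanned
    by the columns of the basis B, using Babai's nearest plane algorithm.

    Runs back-to-front over the dimensions: each step rounds one integer
    coefficient and propagates the rounding error into the subproblem on
    the earlier dimensions.
    """
    # QR factorization encodes the Gram-Schmidt data: B = Q @ R with Q
    # orthonormal and R upper triangular.
    Q, R = np.linalg.qr(B)
    c = Q.T @ t  # target coordinates in the Gram-Schmidt frame
    n = B.shape[1]
    z = np.zeros(n)
    for i in range(n - 1, -1, -1):  # last dimension to first
        # Subtract the contribution of the already-fixed later
        # coefficients, then round to the nearest lattice hyperplane.
        z[i] = np.round((c[i] - R[i, i + 1:] @ z[i + 1:]) / R[i, i])
    return B @ z, z

# Toy usage: a 2-D lattice basis and a target point off the lattice.
B = np.array([[2.0, 1.0],
              [0.0, 2.0]])
t = np.array([3.4, 1.2])
v, z = babai_nearest_plane(B, t)
print(z, v)  # integer coefficients and the lattice point they select
```

In the correspondence described by the abstract, the role of the basis B is played by a factor derived from the Hessian of the layer's inputs and the target t encodes the unquantized weights; the sketch above only illustrates the generic CVP procedure, not the paper's exact construction.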