The Geometry of LLM Quantization: GPTQ as Babai's Nearest Plane Algorithm

July 24, 2025
Authors: Jiale Chen, Torsten Hoefler, Dan Alistarh
cs.AI

Abstract

Quantizing the weights of large language models (LLMs) from 16-bit to lower bitwidth is the de facto approach to deploy massive transformers onto more affordable accelerators. GPTQ emerged as one of the standard methods for one-shot post-training quantization at LLM scale. Yet, its inner workings are described as a sequence of ad-hoc algebraic updates that obscure any geometric meaning or worst-case guarantees. In this work, we show that, when executed back-to-front (from the last to first dimension) for a linear layer, GPTQ is mathematically identical to Babai's nearest plane algorithm for the classical closest vector problem (CVP) on a lattice defined by the Hessian matrix of the layer's inputs. This equivalence is based on a sophisticated mathematical argument, and has two analytical consequences: (i) the GPTQ error propagation step gains an intuitive geometric interpretation; (ii) GPTQ inherits the error upper bound of Babai's algorithm under the no-clipping condition. Taken together, these results place GPTQ on firm theoretical footing and open the door to importing decades of progress in lattice algorithms towards the design of future quantization algorithms for billion-parameter models.
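To make the stated equivalence concrete, below is a minimal NumPy sketch (not the authors' implementation) of Babai's nearest plane algorithm, followed by a toy example connecting it to the per-row GPTQ objective through the Cholesky factor of the input Hessian. The function name `babai_nearest_plane`, the toy dimensions, and the use of the plain integer grid are illustrative assumptions; real GPTQ quantizes to a scaled, possibly clipped grid and adds damping to the Hessian, all of which this sketch omits.

```python
import numpy as np

def babai_nearest_plane(B, t):
    """Babai's nearest plane algorithm for the closest vector problem (CVP).

    B : (n, n) lattice basis, columns are the basis vectors.
    t : (n,) target vector.
    Returns integer coefficients c such that B @ c approximates
    the lattice vector closest to t.
    """
    n = B.shape[1]
    Q, R = np.linalg.qr(B)            # B = Q R, with R upper triangular
    y = Q.T @ t                       # express the target in the Q frame
    c = np.zeros(n, dtype=np.int64)
    for i in range(n - 1, -1, -1):    # back-to-front, as in the GPTQ order
        # Subtract the contribution of the coordinates already fixed
        # (the analogue of GPTQ's error propagation step), then round
        # to the nearest plane along direction i.
        r = y[i] - R[i, i + 1:] @ c[i + 1:]
        c[i] = int(np.round(r / R[i, i]))
    return c

# Toy wiring to the per-row GPTQ objective: minimizing ||(w - q) X||^2
# is minimizing (w - q)^T H (w - q) with Hessian H = X X^T (up to a
# constant factor).  Writing H = L L^T (Cholesky) turns this into
# ||L^T (w - q)||^2, i.e. a CVP on the lattice with upper-triangular
# basis B = L^T and target B @ w.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 64))      # hypothetical layer inputs (d_in, n_samples)
w = rng.standard_normal(8)            # one row of full-precision weights
H = X @ X.T                           # input Hessian, no damping here
B = np.linalg.cholesky(H).T           # upper-triangular lattice basis
q = babai_nearest_plane(B, B @ w)     # quantized integer weights
```

Note one design point this sketch makes visible: because the Cholesky factor is already triangular, the QR step inside `babai_nearest_plane` is mathematically the identity (up to signs) for the basis B = L^T, which is consistent with GPTQ needing no explicit orthogonalization; walking the dimensions back to front already descends Babai's sequence of hyperplanes.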