CoSpaDi: Compressing LLMs via Calibration-Guided Sparse Dictionary Learning
September 26, 2025
Authors: Dmitriy Shopkhoev, Denis Makhov, Magauiya Zhussip, Ammar Ali, Stamatios Lefkimmiatis
cs.AI
Abstract
Post-training compression of large language models (LLMs) largely relies on
low-rank weight approximation, which represents each column of a weight matrix
in a shared low-dimensional subspace. While this is a computationally efficient
strategy, the imposed structural constraint is rigid and can lead to a
noticeable model accuracy drop. In this work, we propose CoSpaDi (Compression
via Sparse Dictionary Learning), a novel training-free compression framework
that replaces low-rank decomposition with a more flexible structured sparse
factorization in which each weight matrix is represented as the product of a
dense dictionary and a column-sparse coefficient matrix. This formulation
enables a
union-of-subspaces representation: different columns of the original weight
matrix are approximated in distinct subspaces spanned by adaptively selected
dictionary atoms, offering greater expressiveness than a single invariant
basis. Crucially, CoSpaDi leverages a small calibration dataset to optimize the
factorization such that the output activations of compressed projection layers
closely match those of the original ones, thereby minimizing functional
reconstruction error rather than mere weight approximation. This data-aware
strategy better preserves model fidelity at reasonable compression ratios
without any fine-tuning. Moreover, the resulting structured sparsity
allows efficient sparse-dense matrix multiplication and is compatible with
post-training quantization for further memory and latency gains. We evaluate
CoSpaDi across multiple Llama and Qwen models under per-layer and per-group
settings at 20-50% compression ratios, demonstrating consistent superiority
over state-of-the-art data-aware low-rank methods in both accuracy and
perplexity. Our results establish structured sparse dictionary learning as a
powerful alternative to conventional low-rank approaches for efficient LLM
deployment.
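
To make the formulation concrete, here is a minimal numerical sketch of the kind of factorization the abstract describes: a weight matrix W is approximated as the product of a dense dictionary D and a column-sparse coefficient matrix C, and the dictionary is fitted against calibration activations so that the compressed layer's outputs match the original ones. The alternating scheme (hard thresholding with per-column refitting for C, a closed-form data-aware least-squares update for D) and all names and parameters are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch only: the alternating scheme, function names, and
# hyperparameters below are assumptions for exposition, not the official
# CoSpaDi algorithm.
import numpy as np


def calibration_guided_sparse_factorization(W, X, num_atoms, nnz_per_column,
                                            num_iters=20, seed=0):
    """Approximate W (d_out x d_in) as D @ C with a dense dictionary
    D (d_out x num_atoms) and a column-sparse coefficient matrix
    C (num_atoms x d_in), minimizing the functional reconstruction error
    ||X W^T - X (D C)^T||_F on calibration activations X (n x d_in)."""
    rng = np.random.default_rng(seed)
    d_out, d_in = W.shape

    # Initialize the dictionary with randomly chosen columns of W.
    D = W[:, rng.choice(d_in, size=num_atoms, replace=False)].copy()
    C = np.zeros((num_atoms, d_in))

    # Calibration statistic: input Gram matrix, used in the data-aware
    # dictionary update below.
    G = X.T @ X / X.shape[0]

    for _ in range(num_iters):
        # Sparse coding (weight-space, for simplicity): unconstrained
        # least-squares fit, keep the largest-magnitude entries in every
        # column of C, then refit the selected atoms per column.
        C_ls = np.linalg.lstsq(D, W, rcond=None)[0]
        C = np.zeros_like(C_ls)
        for j in range(d_in):
            idx = np.argsort(np.abs(C_ls[:, j]))[-nnz_per_column:]
            C[idx, j] = np.linalg.lstsq(D[:, idx], W[:, j], rcond=None)[0]

        # Data-aware dictionary update: closed-form minimizer of
        # ||X W^T - X C^T D^T||_F, i.e. D = W G C^T (C G C^T)^+.
        D = W @ G @ C.T @ np.linalg.pinv(C @ G @ C.T)

    return D, C


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    W = rng.standard_normal((256, 512))    # toy projection weight
    X = rng.standard_normal((1024, 512))   # toy calibration activations
    D, C = calibration_guided_sparse_factorization(
        W, X, num_atoms=128, nnz_per_column=16)
    rel_err = (np.linalg.norm(X @ W.T - X @ (D @ C).T)
               / np.linalg.norm(X @ W.T))
    print(f"relative functional reconstruction error: {rel_err:.3f}")
```

Because every column of C keeps only a few nonzero coefficients, C can be stored in a compressed sparse format, so the forward pass D @ (C @ x) reduces to a sparse-dense matrix multiplication, in line with the efficiency claim above.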