

DOTResize: Reducing LLM Width via Discrete Optimal Transport-based Neuron Merging

July 6, 2025
Authors: Neha Verma, Kenton Murray, Kevin Duh
cs.AI

Abstract

Model compression offers a promising path to reducing the cost and inaccessibility of large pre-trained models, without significantly compromising their impressive performance. Large Transformer models, including large language models (LLMs), often contain computational redundancy, which can serve as a target for new model compression methods. In this work, we specifically target neuron-level redundancies in model layers by combining groups of similar neurons into fewer neurons. We frame this width reduction as a Discrete Optimal Transport problem, and propose DOTResize, a novel Transformer compression method that uses optimal transport theory to transform and compress model weights. To ensure applicability within the Transformer architecture, we motivate and incorporate entropic regularization and matrix factorization into the transportation maps produced by our method. Unlike pruning-based approaches which discard neurons based on importance measures, DOTResize re-projects the entire neuron width, allowing the retention and redistribution of useful signal across the reduced layer. Empirical results show that compared to simple or state-of-the-art neuron width-pruning techniques, DOTResize can outperform these methods across multiple LLM families and sizes, while achieving measurable reductions in real-world computational cost.
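To make the core idea concrete, below is a minimal NumPy sketch of entropically regularized discrete optimal transport applied to neuron merging on a single linear layer. This is not the authors' released implementation: the anchor selection, uniform marginals, `eps`, iteration count, and the helper names `sinkhorn` and `merge_neurons` are illustrative assumptions, and the matrix factorization that DOTResize applies to its transport maps is omitted here.

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iters=500):
    """Entropic OT via Sinkhorn iterations.
    a: (n,) source marginal, b: (m,) target marginal, C: (n, m) cost matrix.
    Returns a transport plan T (n, m) with row sums ~a and column sums ~b."""
    K = np.exp(-C / eps)                      # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def merge_neurons(W, k, eps=0.05, seed=0):
    """Merge the d_out columns (neurons) of W (d_in, d_out) into k columns.
    Returns the compressed (d_in, k) matrix and the (d_out, k) transport plan."""
    d_in, d_out = W.shape
    rng = np.random.default_rng(seed)
    # crude anchors: k randomly chosen original neurons (a clustering-based
    # choice would be a more faithful stand-in for merged-neuron targets)
    anchors = W[:, rng.choice(d_out, size=k, replace=False)]
    # squared Euclidean cost between every original neuron and every anchor
    C = ((W**2).sum(0)[:, None] + (anchors**2).sum(0)[None, :]
         - 2.0 * (W.T @ anchors))
    C = np.maximum(C, 0.0) / C.max()          # clip fp noise, scale to [0, 1]
    a = np.full(d_out, 1.0 / d_out)           # uniform mass on original neurons
    b = np.full(k, 1.0 / k)                   # uniform mass on merged neurons
    T = sinkhorn(a, b, C, eps=eps)            # soft many-to-few assignment
    # re-project the full width: each merged neuron is a transport-weighted
    # combination of *all* original neurons, not a survivor of pruning
    return W @ (T / T.sum(axis=0, keepdims=True)), T

# toy usage: shrink a 1024-wide layer to 768 merged neurons (25% width cut)
W = np.random.default_rng(1).standard_normal((512, 1024))
W_small, T = merge_neurons(W, k=768)
print(W_small.shape)   # (512, 768)
```

The last step reflects the abstract's contrast with pruning: no column of W is discarded outright; every original neuron redistributes its signal into the reduced layer through the transport plan.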