
DOTResize: Reducing LLM Width via Discrete Optimal Transport-based Neuron Merging

July 6, 2025
Authors: Neha Verma, Kenton Murray, Kevin Duh
cs.AI

Abstract

Model compression offers a promising path to reducing the cost and inaccessibility of large pre-trained models, without significantly compromising their impressive performance. Large Transformer models, including large language models (LLMs), often contain computational redundancy, which can serve as a target for new model compression methods. In this work, we specifically target neuron-level redundancies in model layers by combining groups of similar neurons into fewer neurons. We frame this width reduction as a Discrete Optimal Transport problem, and propose DOTResize, a novel Transformer compression method that uses optimal transport theory to transform and compress model weights. To ensure applicability within the Transformer architecture, we motivate and incorporate entropic regularization and matrix factorization into the transportation maps produced by our method. Unlike pruning-based approaches which discard neurons based on importance measures, DOTResize re-projects the entire neuron width, allowing the retention and redistribution of useful signal across the reduced layer. Empirical results show that compared to simple or state-of-the-art neuron width-pruning techniques, DOTResize can outperform these methods across multiple LLM families and sizes, while achieving measurable reductions in real-world computational cost.
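To make the idea concrete, here is a minimal NumPy sketch (not the authors' released implementation) of the core mechanism the abstract describes: treat a layer's n neurons as a discrete source distribution, solve an entropically regularized optimal transport problem to a smaller set of m target neurons with Sinkhorn iterations, and use the resulting transport plan to re-project the layer's weights rather than prune them. The function names (`sinkhorn_plan`, `merge_neurons`), the random target initialization, the uniform neuron masses, and the regularization value are illustrative assumptions; the paper additionally factorizes the transport map so it can be folded into adjacent Transformer weight matrices, which this sketch leaves as an explicit mixing matrix `M`.

```python
# Minimal sketch of OT-based neuron merging, assuming neurons are the rows
# of a weight matrix W (n x d). Not the paper's implementation.
import numpy as np

def sinkhorn_plan(cost, a, b, reg=0.05, n_iters=500):
    """Entropy-regularized discrete OT plan between masses a (n,) and b (m,)."""
    K = np.exp(-cost / reg)                    # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):                   # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]         # transport plan T, shape (n, m)

def merge_neurons(W, m, reg=0.05, seed=0):
    """Reduce the n rows (neurons) of W (n x d) to m < n merged rows via OT."""
    n, _ = W.shape
    rng = np.random.default_rng(seed)
    # Hypothetical target initialization: m source neurons chosen at random
    # (the paper's target construction may differ).
    targets = W[rng.choice(n, size=m, replace=False)]
    cost = ((W[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
    cost = cost / cost.max()                   # rescale for numerical stability
    a = np.full(n, 1.0 / n)                    # uniform mass on source neurons
    b = np.full(m, 1.0 / m)                    # uniform mass on merged neurons
    T = sinkhorn_plan(cost, a, b, reg)
    # Barycentric projection: each merged neuron is a T-weighted average of
    # all original neurons, so signal is redistributed rather than discarded.
    M = T / T.sum(axis=0, keepdims=True)       # (n, m), columns sum to 1
    W_merged = M.T @ W                         # (m, d) merged weight matrix
    return W_merged, M                         # M can be folded into adjacent layers

if __name__ == "__main__":
    W = np.random.randn(64, 16)                # toy layer: 64 neurons, 16 input dims
    W_small, M = merge_neurons(W, m=32)
    print(W_small.shape, M.shape)              # (32, 16) (64, 32)
```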