SlimMoE: Structured Compression of Large MoE Models via Expert Slimming and Distillation
June 23, 2025
Authors: Zichong Li, Chen Liang, Zixuan Zhang, Ilgee Hong, Young Jin Kim, Weizhu Chen, Tuo Zhao
cs.AI
Abstract
The Mixture of Experts (MoE) architecture has emerged as a powerful paradigm
for scaling large language models (LLMs) while maintaining inference
efficiency. However, the enormous memory requirements of MoE models make them prohibitively expensive to fine-tune or deploy in resource-constrained environments. To
address this challenge, we introduce SlimMoE, a multi-stage compression
framework for transforming large MoE models into much smaller, efficient
variants without incurring the prohibitive costs of training from scratch. Our
method systematically reduces parameter counts by slimming experts and
transferring knowledge through intermediate stages, effectively mitigating the
performance degradation common in one-shot pruning approaches. Using this
framework, we compress Phi 3.5-MoE (41.9B total/6.6B activated parameters) to
create Phi-mini-MoE (7.6B total/2.4B activated parameters) and Phi-tiny-MoE
(3.8B total/1.1B activated parameters) using only 400B tokens--less than 10% of
the original model's training data. These compressed models can be fine-tuned
on a single GPU (A100 for Phi-mini-MoE, A6000 for Phi-tiny-MoE), making them
highly suitable for academic and resource-limited settings. Our experiments
demonstrate that these compressed models outperform others of similar size and
remain competitive with larger models. For instance, Phi-mini-MoE achieves performance similar to or better than Phi-3-mini using only 2/3 of the activated parameters, and yields MMLU scores comparable to Llama 3.1 8B while having significantly lower latency. Our findings demonstrate that structured pruning
combined with staged distillation offers an effective path to creating
high-quality, compact MoE models, paving the way for broader adoption of MoE
architectures. We make our models publicly available at
https://huggingface.co/microsoft/Phi-mini-MoE-instruct and
https://huggingface.co/microsoft/Phi-tiny-MoE-instruct.
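The abstract describes expert slimming and staged distillation only at a high level. The sketch below is a minimal illustration of what such a pipeline could look like, assuming that "slimming" means magnitude-based pruning of each expert's FFN intermediate dimension and that "staged" means repeating slim-then-distill cycles toward the target size rather than pruning in one shot. The function and attribute names (slim_expert, iter_experts, up_proj, down_proj), the importance criterion, and the hyperparameters are illustrative assumptions, not the paper's actual algorithm.

```python
# Illustrative sketch only (assumptions, not the paper's exact algorithm):
# "expert slimming" is modeled as magnitude-based pruning of each expert's FFN
# intermediate dimension, and "staged distillation" as repeated slim -> distill
# cycles with progressively smaller widths instead of one-shot pruning.
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def slim_expert(expert, keep_dim):
    """Keep the keep_dim most important intermediate neurons of one expert MLP.
    Assumes the expert has up_proj/down_proj Linear layers (hypothetical names)."""
    importance = expert.up_proj.weight.norm(dim=1)           # one score per intermediate neuron
    keep = importance.topk(keep_dim).indices.sort().values   # neurons to retain
    expert.up_proj.weight = torch.nn.Parameter(expert.up_proj.weight[keep].clone())
    expert.down_proj.weight = torch.nn.Parameter(expert.down_proj.weight[:, keep].clone())

def distill_step(student, teacher, batch, optimizer, temperature=2.0):
    """One knowledge-distillation step: match student logits to teacher logits via KL."""
    with torch.no_grad():
        t_logits = teacher(**batch).logits
    s_logits = student(**batch).logits
    loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=-1),
        F.softmax(t_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def multi_stage_compress(model, stage_widths, data_loader, steps_per_stage, iter_experts):
    """Each stage slims every expert, then distills from the previous (larger) model,
    so no single stage has to absorb the full pruning loss at once."""
    teacher = model
    for width in stage_widths:                    # e.g. [4096, 2048, 1024]
        student = copy.deepcopy(teacher)
        for expert in iter_experts(student):      # iter_experts: caller-supplied helper
            slim_expert(expert, keep_dim=width)
        optimizer = torch.optim.AdamW(student.parameters(), lr=1e-4)
        for _, batch in zip(range(steps_per_stage), data_loader):
            distill_step(student, teacher, batch, optimizer)
        teacher = student                         # next stage distills from this model
    return teacher
```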
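As a quick-start note (not part of the abstract), the released checkpoints can presumably be loaded with the standard Hugging Face transformers API along the lines of the hedged sketch below; whether trust_remote_code is required depends on your transformers version and the model's configuration.

```python
# Quick-start sketch (an assumption, not from the paper): load the released
# Phi-mini-MoE-instruct checkpoint with Hugging Face transformers and generate a reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-mini-MoE-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 7.6B total parameters; fits on a single A100 in bf16
    device_map="auto",
    trust_remote_code=True,       # may be unnecessary on recent transformers versions
)

messages = [{"role": "user", "content": "Summarize mixture-of-experts models in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```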