Expert Upcycling: Shifting the Compute-Efficient Frontier of Mixture-of-Experts
April 21, 2026
Authors: Chaitanya Dwivedi, Binxuan Huang, Himanshu Gupta, Pratik Jayarao, Neeraj Varshney, Bing Yin
cs.AI
Abstract
Mixture-of-Experts (MoE) has become the dominant architecture for scaling large language models: frontier models routinely decouple total parameters from per-token computation through sparse expert routing. Scaling laws show that under fixed active computation, model quality scales predictably with total parameters, and MoEs realize this by increasing expert count. However, training large MoEs is expensive, as memory requirements and inter-device communication both scale with total parameter count. We propose expert upcycling, a method for progressively expanding MoE capacity by increasing the number of experts during continued pre-training (CPT). Given a trained E-expert model, the upcycling operator constructs an mE-expert model through expert duplication and router extension while holding top-K routing fixed, preserving per-token inference cost. Duplication provides a warm initialization: the expanded model inherits the source checkpoint's learned representations, starting from a substantially lower loss than random initialization. Subsequent CPT then breaks the symmetry among duplicated experts to drive specialization. We formalize the upcycling operator and develop a theoretical framework decomposing the quality gap into a capacity term and an initialization term. We further introduce utility-based expert selection, which uses gradient-based importance scores to guide non-uniform duplication, more than tripling gap closure when CPT is limited. In experiments with 7B-13B total parameters, the upcycled model matches the fixed-size baseline on validation loss while saving 32% of GPU hours. Comprehensive ablations across model scales, activation ratios, MoE architectures, and training budgets yield a practical recipe for deploying expert upcycling, establishing it as a principled, compute-efficient alternative to training large MoE models from scratch.
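To make the duplication-and-router-extension step concrete, here is a minimal PyTorch sketch of upcycling an E-expert top-K MoE layer to mE experts. The `MoELayer` class, its `router` and `experts` fields, and the `upcycle` helper are hypothetical names chosen for illustration, not the authors' implementation; the tie-breaking perturbation on the copied router rows is likewise our assumption.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Toy top-K MoE layer: a linear router over a list of expert MLPs."""
    def __init__(self, d_model: int, d_ff: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [tokens, d_model]; each token is routed to its top-K experts.
        logits = self.router(x)                         # [tokens, num_experts]
        weights, idx = logits.topk(self.top_k, dim=-1)  # [tokens, K]
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

def upcycle(layer: MoELayer, m: int) -> MoELayer:
    """Expand an E-expert layer to m*E experts by uniform duplication.
    top_k is unchanged, so per-token inference cost is preserved."""
    E = layer.router.out_features
    d_model = layer.router.in_features
    d_ff = layer.experts[0][0].out_features
    new = MoELayer(d_model, d_ff, m * E, layer.top_k)
    with torch.no_grad():
        for j in range(m * E):
            src = j % E  # replica j inherits source expert j % E
            new.experts[j].load_state_dict(layer.experts[src].state_dict())
            # Router extension: each replica starts from its source's routing
            # row, so the expanded model begins near the source checkpoint's
            # loss. A tiny perturbation (our assumption) breaks exact routing
            # ties among replicas; continued pre-training then drives the
            # replicas apart into specialized experts.
            new.router.weight[j].copy_(
                layer.router.weight[src] + 1e-3 * torch.randn(d_model)
            )
    return new

# Example: grow an 8-expert layer to 32 experts (m = 4) at fixed top-K.
layer = MoELayer(d_model=256, d_ff=1024, num_experts=8)
bigger = upcycle(layer, m=4)
```

The abstract's utility-based expert selection replaces uniform duplication with replica counts guided by gradient-based importance scores. The paper's exact score is not given in the abstract, so the sketch below uses a standard first-order Taylor importance, the sum of |w * dL/dw| over each expert's parameters, as a stand-in; `expert_utility`, `allocate_replicas`, and the MSE objective are all illustrative assumptions.

```python
import torch.nn.functional as F

def expert_utility(layer: MoELayer, x: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Per-expert importance from one backward pass (first-order Taylor score)."""
    layer.zero_grad()
    F.mse_loss(layer(x), target).backward()  # stand-in training objective
    scores = []
    for expert in layer.experts:
        s = torch.zeros(())
        for p in expert.parameters():
            if p.grad is not None:  # experts that received no tokens have no grad
                s = s + (p.detach() * p.grad).abs().sum()
        scores.append(s)
    return torch.stack(scores)

def allocate_replicas(utility: torch.Tensor, total_new: int) -> list[int]:
    """Split total_new extra expert slots in proportion to utility,
    using largest-remainder rounding so the counts sum exactly."""
    share = utility / utility.sum() * total_new
    counts = share.floor().long()
    for i in share.frac().argsort(descending=True)[: total_new - int(counts.sum())]:
        counts[i] += 1
    return counts.tolist()

# Example: distribute 24 extra replicas across the 8 source experts,
# biased toward high-utility experts (non-uniform duplication).
util = expert_utility(layer, torch.randn(64, 256), torch.randn(64, 256))
extra = allocate_replicas(util, total_new=24)  # replica counts summing to 24
```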