Least-Loaded Expert Parallelism: Load Balancing An Imbalanced Mixture-of-Experts
January 23, 2026
Authors: Xuan-Phi Nguyen, Shrey Pandit, Austin Xu, Caiming Xiong, Shafiq Joty
cs.AI
Abstract
Mixture-of-Experts (MoE) models are typically pre-trained with explicit load-balancing constraints to ensure statistically balanced expert routing. Despite this, we observe that even well-trained MoE models exhibit significantly imbalanced routing. This behavior is arguably natural, and even desirable, as imbalanced routing allows models to concentrate domain-specific knowledge within a subset of experts. Expert parallelism (EP) is designed to scale MoE models by distributing experts across multiple devices, but it rests on a rarely discussed assumption of balanced routing. Under extreme imbalance, EP can funnel a disproportionate number of tokens to a small number of experts, leading to compute- and memory-bound failures on overloaded devices during post-training or inference, where explicit load balancing is often inapplicable. We propose Least-Loaded Expert Parallelism (LLEP), a novel EP algorithm that dynamically reroutes excess tokens and the associated expert parameters from overloaded devices to underutilized ones. This ensures that all devices complete their workloads within the minimum collective latency while respecting memory constraints. Across different model scales, LLEP achieves up to a 5x speedup and a 4x reduction in peak memory usage compared to standard EP, enabling faster and higher-throughput post-training and inference, with post-training of gpt-oss-120b running roughly 1.9x faster. We support our method with extensive theoretical analysis and comprehensive empirical evaluations, including ablation studies. These results illuminate key trade-offs and establish a principled framework for hardware-specific hyper-parameter tuning to achieve optimal performance.
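For intuition only, the sketch below illustrates the least-loaded rebalancing idea described in the abstract: tokens that would push a device past a capacity threshold are greedily handed to whichever device currently carries the smallest load. All names (rebalance, tokens_per_expert, capacity) and the toy greedy policy are assumptions for illustration; the actual LLEP algorithm also migrates the corresponding expert parameters and minimizes collective latency under memory constraints, which this sketch does not model.

```python
# Hypothetical toy sketch (not the paper's implementation): greedily reassign
# overflow tokens from overloaded devices to the currently least-loaded device.
def rebalance(tokens_per_expert, expert_to_device, num_devices, capacity):
    """Return (expert, n_tokens, src_device, dst_device) reroute decisions."""
    # Per-device token load under standard expert parallelism placement.
    load = [0] * num_devices
    for expert, n in enumerate(tokens_per_expert):
        load[expert_to_device[expert]] += n

    reroutes = []
    for expert, n in enumerate(tokens_per_expert):
        src = expert_to_device[expert]
        remaining = n
        while load[src] > capacity and remaining > 0:
            dst = min(range(num_devices), key=lambda d: load[d])  # least-loaded device
            if dst == src or load[dst] >= capacity:
                break  # no device has spare capacity for this expert's tokens
            move = min(remaining, load[src] - capacity, capacity - load[dst])
            load[src] -= move
            load[dst] += move
            remaining -= move
            reroutes.append((expert, move, src, dst))
    return reroutes

if __name__ == "__main__":
    # Two devices, four experts, routing heavily skewed toward expert 0.
    print(rebalance([900, 50, 30, 20], [0, 0, 1, 1], num_devices=2, capacity=500))
    # -> [(0, 450, 0, 1)]  (device loads go from [950, 50] to [500, 500])
```

In this toy setting the skewed routing that standard EP would leave on device 0 is evened out by a single reroute; the paper's contribution is doing this at scale while also deciding which expert parameters to replicate and keeping every device within its memory budget.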