Least-Loaded Expert Parallelism: Load Balancing An Imbalanced Mixture-of-Experts
January 23, 2026
Authors: Xuan-Phi Nguyen, Shrey Pandit, Austin Xu, Caiming Xiong, Shafiq Joty
cs.AI
Abstract
Mixture-of-Experts (MoE) models are typically pre-trained with explicit load-balancing constraints to ensure statistically balanced expert routing. Despite this, we observe that even well-trained MoE models exhibit significantly imbalanced routing. This behavior is arguably natural, and even desirable, as imbalanced routing allows models to concentrate domain-specific knowledge within a subset of experts. Expert parallelism (EP) is designed to scale MoE models by distributing experts across multiple devices, but it rests on a less-discussed assumption of balanced routing. Under extreme imbalance, EP can funnel a disproportionate number of tokens to a small number of experts, leading to compute- and memory-bound failures on overloaded devices during post-training or inference, where explicit load balancing is often inapplicable. We propose Least-Loaded Expert Parallelism (LLEP), a novel EP algorithm that dynamically reroutes excess tokens and the associated expert parameters from overloaded devices to underutilized ones. This ensures that all devices complete their workloads within the minimum collective latency while respecting memory constraints. Across different model scales, LLEP achieves up to a 5x speedup and a 4x reduction in peak memory usage compared to standard EP, enabling faster and higher-throughput post-training and inference, including a ~1.9x speedup for gpt-oss-120b. We support our method with extensive theoretical analysis and comprehensive empirical evaluations, including ablation studies. These results illuminate key trade-offs and establish a principled framework for hardware-specific hyper-parameter tuning to achieve optimal performance.
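To make the rerouting idea concrete, below is a minimal Python sketch of greedy least-loaded reassignment under a capacity cap. It illustrates the general principle only; the function name least_loaded_rebalance, the capacity_factor heuristic, and the single-destination transfer policy are assumptions of this sketch, not the paper's actual LLEP algorithm (which additionally optimizes collective latency under memory constraints).

```python
# Illustrative greedy rebalancing in the spirit of least-loaded expert
# parallelism. NOT the paper's LLEP algorithm: capacity_factor and all
# names here are assumptions made for this sketch.

def least_loaded_rebalance(expert_tokens, expert_to_device, num_devices,
                           capacity_factor=1.25):
    """Reassign excess tokens (together with a replica of the owning
    expert's parameters) from overloaded devices to the device that
    currently has the least load."""
    # Per-device load implied by the imbalanced routing decisions.
    load = [0.0] * num_devices
    for expert, n in expert_tokens.items():
        load[expert_to_device[expert]] += n

    # Treat load above capacity_factor * mean load as excess to move.
    capacity = capacity_factor * sum(load) / num_devices

    transfers = []  # (expert, src_device, dst_device, n_tokens)
    for expert, n in sorted(expert_tokens.items(), key=lambda kv: -kv[1]):
        src = expert_to_device[expert]
        excess = min(n, load[src] - capacity)
        if excess <= 0:
            continue
        # One-shot move to the least-loaded device; a full version
        # would split the excess across several destinations.
        dst = min(range(num_devices), key=load.__getitem__)
        if dst == src:
            continue
        transfers.append((expert, src, dst, int(excess)))
        load[src] -= excess
        load[dst] += excess
    return transfers, load


# Toy example: expert 0 receives almost all tokens on device 0.
transfers, load = least_loaded_rebalance(
    expert_tokens={0: 900, 1: 50, 2: 30, 3: 20},
    expert_to_device={0: 0, 1: 0, 2: 1, 3: 1},
    num_devices=2)
print(transfers)  # [(0, 0, 1, 325)] -> loads become [625.0, 375.0]
```

In the toy run, the single hot expert's overflow is replicated onto the idle device, pulling the peak load down to the capacity cap, which is the effect the abstract attributes to LLEP.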