μ-Parametrization for Mixture of Experts
August 13, 2025
Authors: Jan Małaśnicki, Kamil Ciebiera, Mateusz Boruń, Maciej Pióro, Jan Ludziejewski, Maciej Stefaniak, Michał Krutul, Sebastian Jaszczur, Marek Cygan, Kamil Adamczewski, Jakub Krajewski
cs.AI
Abstract
Recent years have seen growing interest in and adoption of LLMs, with μTransfer becoming a key technique for tuning hyperparameters in large-scale training. Meanwhile, Mixture-of-Experts (MoE) has emerged as a leading architecture for extremely large models. However, the intersection of these two advances has remained unexplored. In this work, we derive a μ-Parametrization (μP) for MoE, providing theoretical guarantees for feature learning across model widths in both the router and the experts. We empirically validate our parameterization and further investigate how scaling the number of experts and the granularity affects the optimal learning rate.
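For intuition, below is a minimal PyTorch sketch of how generic μP/μTransfer-style scaling rules (fan-in-scaled initialization and width-scaled Adam learning rates for matrix-like weights) could be applied to an MoE block's router and expert weights. The names here (MoEBlock, mup_param_groups, base_width) are illustrative assumptions, and the exact parameterization derived in the paper may differ from this simplification, in particular in how the router is treated.

```python
# Hedged sketch: generic muP-style scaling applied to a toy MoE block.
# Init variance ~ 1/fan_in for every matrix; Adam LR for matrix-like weights
# scaled by base_width / d_model. This is NOT the paper's exact derivation;
# the router is treated like an expert matrix here purely as an assumption.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEBlock(nn.Module):
    """Toy top-1 MoE feed-forward block with a linear router."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.w_in = nn.Parameter(torch.empty(n_experts, d_model, d_ff))
        self.w_out = nn.Parameter(torch.empty(n_experts, d_ff, d_model))
        # muP-style init: std = 1/sqrt(fan_in) for router, w_in (fan_in = d_model)
        # and w_out (fan_in = d_ff).
        nn.init.normal_(self.router.weight, std=1.0 / math.sqrt(d_model))
        nn.init.normal_(self.w_in, std=1.0 / math.sqrt(d_model))
        nn.init.normal_(self.w_out, std=1.0 / math.sqrt(d_ff))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); route each token to its top-1 expert.
        gates = F.softmax(self.router(x), dim=-1)   # (tokens, n_experts)
        top_gate, top_idx = gates.max(dim=-1)       # (tokens,)
        h = torch.einsum("td,tdf->tf", x, self.w_in[top_idx]).relu()
        y = torch.einsum("tf,tfd->td", h, self.w_out[top_idx])
        return top_gate.unsqueeze(-1) * y


def mup_param_groups(block: MoEBlock, base_lr: float, d_model: int, base_width: int):
    """Under muP with Adam, the LR of hidden matrices shrinks like
    base_width / d_model as width grows; all parameters of this toy block
    are matrix-like, so a single scaled group is used (an assumption)."""
    scale = base_width / d_model
    return [{"params": block.parameters(), "lr": base_lr * scale}]


if __name__ == "__main__":
    block = MoEBlock(d_model=512, d_ff=2048, n_experts=8)
    opt = torch.optim.Adam(mup_param_groups(block, base_lr=1e-3, d_model=512, base_width=256))
    out = block(torch.randn(16, 512))
    print(out.shape)  # torch.Size([16, 512])
```

Under this kind of parameterization, a learning rate tuned at the base width is intended to transfer to wider models; the paper's contribution is establishing which scaling rules give that guarantee for the router and experts specifically.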