

Scaling Laws for Fine-Grained Mixture of Experts

February 12, 2024
Authors: Jakub Krajewski, Jan Ludziejewski, Kamil Adamczewski, Maciej Pióro, Michał Krutul, Szymon Antoniak, Kamil Ciebiera, Krystian Król, Tomasz Odrzygóźdź, Piotr Sankowski, Marek Cygan, Sebastian Jaszczur
cs.AI

Abstract

Mixture of Experts (MoE) models have emerged as a primary solution for reducing the computational cost of Large Language Models. In this work, we analyze their scaling properties, incorporating an expanded range of variables. Specifically, we introduce a new hyperparameter, granularity, whose adjustment enables precise control over the size of the experts. Building on this, we establish scaling laws for fine-grained MoE, taking into account the number of training tokens, model size, and granularity. Leveraging these laws, we derive the optimal training configuration for a given computational budget. Our findings not only show that MoE models consistently outperform dense Transformers but also highlight that the efficiency gap between dense and MoE models widens as we scale up the model size and training budget. Furthermore, we demonstrate that the common practice of setting the size of experts in MoE to mirror the feed-forward layer is not optimal at almost any computational budget.
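To make the granularity hyperparameter concrete, the sketch below illustrates one common way to think about it: increasing granularity G splits each expert into G smaller ones (each with 1/G of the feed-forward hidden size) while scaling the total expert count and the number of experts routed per token by G, so the active parameter count per token stays fixed. This is an illustrative sketch, not the authors' code; the names `MoEConfig` and `fine_grained` are hypothetical.

```python
# Illustrative sketch of the granularity hyperparameter G in a fine-grained MoE layer.
# Assumption: G shrinks each expert's hidden size and proportionally increases the
# number of experts and the routed top-k, keeping active compute per token constant.

from dataclasses import dataclass


@dataclass
class MoEConfig:
    d_model: int    # residual-stream width
    d_expert: int   # hidden size of a single expert's FFN
    n_experts: int  # total number of experts
    top_k: int      # experts activated per token

    @property
    def active_params_per_token(self) -> int:
        # Two projection matrices (up and down) per activated expert.
        return self.top_k * 2 * self.d_model * self.d_expert


def fine_grained(base: MoEConfig, granularity: int) -> MoEConfig:
    """Split each expert into `granularity` smaller ones, holding active compute fixed."""
    assert base.d_expert % granularity == 0
    return MoEConfig(
        d_model=base.d_model,
        d_expert=base.d_expert // granularity,   # smaller experts
        n_experts=base.n_experts * granularity,  # proportionally more of them
        top_k=base.top_k * granularity,          # proportionally more routed per token
    )


if __name__ == "__main__":
    # G = 1 recovers the standard setup in which each expert mirrors the dense FFN.
    base = MoEConfig(d_model=1024, d_expert=4096, n_experts=8, top_k=2)
    for g in (1, 2, 4, 8):
        cfg = fine_grained(base, g)
        print(f"G={g}: {cfg.n_experts} experts of width {cfg.d_expert}, "
              f"active params/token={cfg.active_params_per_token}")
```

Under this parameterization, G = 1 corresponds to the common practice the abstract refers to (each expert mirrors the dense feed-forward layer), and larger G trades fewer large experts for many small ones at roughly constant per-token compute; the paper's scaling laws quantify how loss depends on this choice together with model size and training tokens.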