Layerwise Recurrent Router for Mixture-of-Experts

August 13, 2024
Authors: Zihan Qiu, Zeyu Huang, Shuang Cheng, Yizhi Zhou, Zili Wang, Ivan Titov, Jie Fu
cs.AI

Abstract

The scaling of large language models (LLMs) has revolutionized their capabilities in various tasks, yet this growth must be matched with efficient computational strategies. The Mixture-of-Experts (MoE) architecture stands out for its ability to scale model size without significantly increasing training costs. Despite their advantages, current MoE models often display parameter inefficiency. For instance, a pre-trained MoE-based LLM with 52 billion parameters might perform only comparably to a standard model with 6.7 billion parameters. As a crucial component of MoE, the routers in different layers currently assign tokens independently, without leveraging historical routing information, potentially leading to suboptimal token-expert combinations and the parameter inefficiency problem. To alleviate this issue, we introduce the Layerwise Recurrent Router for Mixture-of-Experts (RMoE). RMoE leverages a Gated Recurrent Unit (GRU) to establish dependencies between routing decisions across consecutive layers. Such layerwise recurrence can be computed efficiently in parallel over input tokens and introduces negligible cost. Our extensive empirical evaluations demonstrate that RMoE-based language models consistently outperform a spectrum of baseline models. Furthermore, RMoE introduces a novel computation stage orthogonal to existing methods, allowing seamless compatibility with other MoE architectures. Our analyses attribute RMoE's gains to its effective cross-layer information sharing, which also improves expert selection and diversity. Our code is at https://github.com/qiuzh20/RMoE
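To make the routing mechanism concrete, the sketch below shows how a GRU cell can carry a per-token routing state from one MoE layer's router to the next, so that each layer's expert logits depend on earlier routing decisions rather than on the current token alone. This is a minimal PyTorch illustration under assumed names and dimensions (`RecurrentRouter`, `router_dim`, top-2 selection are hypothetical choices), not the authors' implementation; refer to the official code at https://github.com/qiuzh20/RMoE for the exact design.

```python
import torch
import torch.nn as nn


class RecurrentRouter(nn.Module):
    """Hypothetical layerwise recurrent router: a GRU cell propagates a
    per-token routing state across consecutive MoE layers, so gate logits
    at layer l can depend on routing information from earlier layers."""

    def __init__(self, hidden_dim: int, router_dim: int, num_experts: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, router_dim)   # token hidden state -> router input
        self.gru = nn.GRUCell(router_dim, router_dim)   # recurrence over depth (layers)
        self.gate = nn.Linear(router_dim, num_experts)  # routing state -> expert logits

    def forward(self, tokens: torch.Tensor, prev_state: torch.Tensor | None = None):
        # tokens: (num_tokens, hidden_dim); prev_state: routing state handed over
        # from the previous layer's router (None at the first MoE layer).
        router_in = self.proj(tokens)
        if prev_state is None:
            prev_state = torch.zeros_like(router_in)
        state = self.gru(router_in, prev_state)          # parallel over tokens, recurrent over layers
        probs = self.gate(state).softmax(dim=-1)
        weights, experts = probs.topk(k=2, dim=-1)       # standard top-2 expert selection
        return experts, weights, state                   # hand `state` to the next layer's router
```

In use, each MoE layer would call its router with the layer's token representations and the state returned by the previous layer's router, then dispatch tokens to the selected experts as usual; the only addition over a standard router is the lightweight GRU update, which is why the recurrence parallelizes over tokens.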
