Linear-MoE: Linear Sequence Modeling Meets Mixture-of-Experts
March 7, 2025
Authors: Weigao Sun, Disen Lan, Tong Zhu, Xiaoye Qu, Yu Cheng
cs.AI
Abstract
Linear Sequence Modeling (LSM) methods, such as linear attention, state space models, and linear RNNs, together with Mixture-of-Experts (MoE), have recently emerged as significant architectural improvements. In this paper, we introduce Linear-MoE, a production-level system for modeling and training large-scale models that integrate LSM with MoE. Linear-MoE leverages the advantages of both LSM modules for linear-complexity sequence modeling and MoE layers for sparse activation, aiming to offer high performance with efficient training. The Linear-MoE system comprises: 1) a Modeling subsystem, which provides a unified framework supporting all LSM instances; and 2) a Training subsystem, which facilitates efficient training by incorporating various advanced parallelism technologies, in particular Sequence Parallelism designed for Linear-MoE models. Additionally, we explore hybrid models that combine Linear-MoE layers with standard Transformer-MoE layers, together with the corresponding Sequence Parallelism, to further enhance model flexibility and performance. Evaluations on two model series, A0.3B-2B and A1B-7B, demonstrate that Linear-MoE achieves efficiency gains while maintaining competitive performance on various benchmarks, showcasing its potential as a next-generation foundation model architecture. Code: https://github.com/OpenSparseLLMs/Linear-MoE.
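
To make the architecture described in the abstract concrete, below is a minimal, illustrative PyTorch sketch of a single Linear-MoE block: an LSM token mixer (here, a simple non-causal linear attention with an elu+1 feature map, shown as one possible LSM instance) followed by a sparsely activated MoE feed-forward layer with top-1 routing. All class names, dimensions, and routing details are assumptions for illustration only; they do not reflect the actual implementation in the Linear-MoE repository, which supports many LSM instances, hybrid layer stacking, and the parallel training techniques mentioned above.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """O(N) token mixer: softmax attention replaced by an elu+1 feature map.
    Shown in its non-causal parallel form for brevity."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads, self.head_dim = heads, dim // heads
        self.qkv = nn.Linear(dim, 3 * dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)

    def forward(self, x):  # x: (batch, seq, dim)
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.reshape(b, n, self.heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        q, k = F.elu(q) + 1, F.elu(k) + 1              # positive feature map
        kv = torch.einsum("bhnd,bhne->bhde", k, v)     # fixed-size state: linear in seq length
        z = 1.0 / (torch.einsum("bhnd,bhd->bhn", q, k.sum(dim=2)) + 1e-6)
        y = torch.einsum("bhnd,bhde,bhn->bhne", q, kv, z)
        return self.out(y.transpose(1, 2).reshape(b, n, -1))


class MoEFFN(nn.Module):
    """Sparsely activated feed-forward layer with top-1 token routing."""
    def __init__(self, dim, num_experts=4, expansion=4):
        super().__init__()
        self.router = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, expansion * dim), nn.GELU(),
                          nn.Linear(expansion * dim, dim))
            for _ in range(num_experts))

    def forward(self, x):  # x: (batch, seq, dim)
        gates = self.router(x).softmax(dim=-1)         # (batch, seq, num_experts)
        weight, idx = gates.max(dim=-1)                # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e                            # tokens routed to expert e
            if mask.any():
                out[mask] = weight[mask].unsqueeze(-1) * expert(x[mask])
        return out


class LinearMoEBlock(nn.Module):
    """One Linear-MoE layer: LSM token mixing + MoE channel mixing, pre-norm residuals."""
    def __init__(self, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mixer, self.moe = LinearAttention(dim), MoEFFN(dim)

    def forward(self, x):
        x = x + self.mixer(self.norm1(x))
        return x + self.moe(self.norm2(x))


if __name__ == "__main__":
    block = LinearMoEBlock(dim=64)
    print(block(torch.randn(2, 128, 64)).shape)        # torch.Size([2, 128, 64])

In the hybrid models the abstract describes, blocks of this kind would be interleaved with standard softmax-attention Transformer-MoE blocks in a single layer stack, and the Training subsystem would apply the parallelism techniques mentioned above (in particular Sequence Parallelism) on top of them.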