
Multi-Head Mixture-of-Experts

April 23, 2024
Authors: Xun Wu, Shaohan Huang, Wenhui Wang, Furu Wei
cs.AI

Abstract

Sparse Mixtures of Experts (SMoE) scales model capacity without significant increases in training and inference costs, but exhibits two issues: (1) low expert activation, where only a small subset of experts are activated for optimization, and (2) a lack of fine-grained analytical capability for the multiple semantic concepts within individual tokens. We propose Multi-Head Mixture-of-Experts (MH-MoE), which employs a multi-head mechanism to split each token into multiple sub-tokens. These sub-tokens are then assigned to and processed by a diverse set of experts in parallel, and seamlessly reintegrated into the original token form. The multi-head mechanism enables the model to collectively attend to information from various representation spaces within different experts, while significantly enhancing expert activation, thereby deepening context understanding and alleviating overfitting. Moreover, MH-MoE is straightforward to implement and is decoupled from other SMoE optimization methods, making it easy to integrate with other SMoE models for improved performance. Extensive experiments on three tasks, namely English-focused language modeling, multilingual language modeling, and masked multi-modality modeling, demonstrate the effectiveness of MH-MoE.
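
To make the split, route, and merge flow described in the abstract concrete, below is a minimal sketch in PyTorch. The class name MHMoESketch, the linear split/merge projections, the top-1 router, and the two-layer expert FFNs are illustrative assumptions based only on the abstract, not the paper's actual implementation.

```python
# Minimal sketch of the MH-MoE idea from the abstract: split each token into
# sub-tokens, route sub-tokens to experts, then merge them back into tokens.
# All layer choices here (top-1 routing, FFN experts) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MHMoESketch(nn.Module):
    def __init__(self, d_model=512, num_heads=4, num_experts=8, d_ff=1024):
        super().__init__()
        assert d_model % num_heads == 0
        self.num_heads = num_heads
        self.d_head = d_model // num_heads
        # Multi-head projection that splits each token into sub-tokens.
        self.split_proj = nn.Linear(d_model, d_model)
        # Router scores each sub-token against the experts (top-1 gating here).
        self.router = nn.Linear(self.d_head, num_experts)
        # Each expert is a small feed-forward network operating on sub-tokens.
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(self.d_head, d_ff),
                nn.GELU(),
                nn.Linear(d_ff, self.d_head),
            )
            for _ in range(num_experts)
        )
        # Merge projection that reintegrates sub-tokens into full tokens.
        self.merge_proj = nn.Linear(d_model, d_model)

    def forward(self, x):  # x: (batch, seq, d_model)
        b, s, d = x.shape
        # 1) Split: each token becomes `num_heads` sub-tokens of size d_head.
        sub = self.split_proj(x).reshape(b * s * self.num_heads, self.d_head)
        # 2) Route: assign each sub-token to one expert and weight its output.
        gate = F.softmax(self.router(sub), dim=-1)
        weight, expert_idx = gate.max(dim=-1)
        out = torch.zeros_like(sub)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():
                out[mask] = weight[mask, None] * expert(sub[mask])
        # 3) Merge: concatenate processed sub-tokens back into token form.
        merged = out.reshape(b, s, d)
        return self.merge_proj(merged)


# Usage: y = MHMoESketch()(torch.randn(2, 16, 512))  # -> shape (2, 16, 512)
```

Because routing happens per sub-token rather than per token, more experts receive gradient signal for a given batch, which is the intuition behind the improved expert activation the abstract describes.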
