Mixture-of-Transformers: A Sparse and Scalable Architecture for Multi-Modal Foundation Models
November 7, 2024
Authors: Weixin Liang, Lili Yu, Liang Luo, Srinivasan Iyer, Ning Dong, Chunting Zhou, Gargi Ghosh, Mike Lewis, Wen-tau Yih, Luke Zettlemoyer, Xi Victoria Lin
cs.AI
Abstract
The development of large language models (LLMs) has expanded to multi-modal
systems capable of processing text, images, and speech within a unified
framework. Training these models demands significantly larger datasets and
computational resources compared to text-only LLMs. To address the scaling
challenges, we introduce Mixture-of-Transformers (MoT), a sparse multi-modal
transformer architecture that significantly reduces pretraining computational
costs. MoT decouples non-embedding parameters of the model by modality --
including feed-forward networks, attention matrices, and layer normalization --
enabling modality-specific processing with global self-attention over the full
input sequence. We evaluate MoT across multiple settings and model scales. In
the Chameleon 7B setting (autoregressive text-and-image generation), MoT
matches the dense baseline's performance using only 55.8% of the FLOPs. When
extended to include speech, MoT reaches speech performance comparable to the
dense baseline with only 37.2% of the FLOPs. In the Transfusion setting, where
text and image are trained with different objectives, a 7B MoT model matches
the image modality performance of the dense baseline with one third of the
FLOPs, and a 760M MoT model outperforms a 1.4B dense baseline across key image
generation metrics. System profiling further highlights MoT's practical
benefits, achieving dense baseline image quality in 47.2% of the wall-clock
time and text quality in 75.6% of the wall-clock time (measured on AWS
p4de.24xlarge instances with NVIDIA A100 GPUs).
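
To make the parameter-decoupling idea concrete, below is a minimal, illustrative PyTorch sketch of a single MoT-style block, not the authors' implementation. Each modality gets its own attention projections, feed-forward network, and layer norms (the non-embedding parameters), while self-attention itself is computed globally over the full mixed-modality sequence. The class and module names, the two-modality setup, the per-token `modality_ids` routing, and the causal attention mask are illustrative assumptions.

```python
# Illustrative sketch of a Mixture-of-Transformers (MoT)-style block (assumed design,
# not the paper's code): modality-specific non-embedding parameters, global self-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoTBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int, n_modalities: int = 2):
        super().__init__()
        self.n_heads, self.d_model = n_heads, d_model
        # Modality-specific (untied) parameters: attention projections, FFNs, layer norms.
        self.wq = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_modalities))
        self.wk = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_modalities))
        self.wv = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_modalities))
        self.wo = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_modalities))
        self.ffn = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
            for _ in range(n_modalities)
        )
        self.norm1 = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(n_modalities))
        self.norm2 = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(n_modalities))

    def _route(self, x, modality_ids, modules):
        # Apply each modality's module only to that modality's tokens.
        out = torch.zeros_like(x)
        for m, mod in enumerate(modules):
            mask = modality_ids == m  # (batch, seq) boolean mask
            if mask.any():
                out[mask] = mod(x[mask])
        return out

    def forward(self, x, modality_ids):
        # x: (batch, seq, d_model); modality_ids: (batch, seq) with values in [0, n_modalities)
        B, T, D = x.shape
        h = self._route(x, modality_ids, self.norm1)
        q = self._route(h, modality_ids, self.wq)  # modality-specific projections
        k = self._route(h, modality_ids, self.wk)
        v = self._route(h, modality_ids, self.wv)

        def split(t):  # (B, T, D) -> (B, heads, T, head_dim)
            return t.view(B, T, self.n_heads, D // self.n_heads).transpose(1, 2)

        # Global self-attention over the full mixed-modality sequence
        # (causal mask assumed here for the autoregressive setting).
        attn = F.scaled_dot_product_attention(split(q), split(k), split(v), is_causal=True)
        attn = attn.transpose(1, 2).reshape(B, T, D)
        x = x + self._route(attn, modality_ids, self.wo)
        h2 = self._route(x, modality_ids, self.norm2)
        x = x + self._route(h2, modality_ids, self.ffn)
        return x


if __name__ == "__main__":
    block = MoTBlock(d_model=512, n_heads=8, n_modalities=2)
    tokens = torch.randn(2, 16, 512)                 # mixed text/image token embeddings
    modality_ids = torch.randint(0, 2, (2, 16))      # 0 = text, 1 = image (assumed convention)
    print(block(tokens, modality_ids).shape)         # torch.Size([2, 16, 512])
```

The sketch is dense in compute per token (every token passes through exactly one set of modality parameters), so the FLOPs per forward pass match a same-width dense transformer; the reported savings come from faster convergence per FLOP rather than from skipping computation.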