Explore the Limits of Omni-modal Pretraining at Scale
June 13, 2024
Authors: Yiyuan Zhang, Handong Li, Jing Liu, Xiangyu Yue
cs.AI
Abstract
We propose to build omni-modal intelligence capable of understanding any modality and learning universal representations. Specifically, we propose a scalable pretraining paradigm, named Multimodal Context (MiCo), which can scale up the number of modalities, the amount of data, and the model parameters during pretraining. With MiCo, the pretrained models show significant emergent abilities in multimodal learning, which are evaluated on the following tasks: i) single-modality perception benchmarks across 10 different modalities, ii) 25 cross-modal understanding tasks covering retrieval, question answering, and captioning, and iii) 18 multimodal large language model benchmarks. Our models set 37 new state-of-the-art records. We hope that our research contributes to the development of omni-modal intelligence. Code and models are available at https://github.com/invictus717/MiCo