Explore the Limits of Omni-modal Pretraining at Scale
June 13, 2024
Authors: Yiyuan Zhang, Handong Li, Jing Liu, Xiangyu Yue
cs.AI
Abstract
We propose to build omni-modal intelligence capable of understanding any modality and learning universal representations. Specifically, we propose a scalable pretraining paradigm, named Multimodal Context (MiCo), which can scale up the number of modalities and the amount of data, together with the model parameters, during pretraining. With MiCo, the pretrained models show significant emergent abilities in multimodal learning, which are evaluated on the following tasks: i) single-modality perception benchmarks across 10 different modalities, ii) 25 cross-modality understanding tasks covering retrieval, question answering, and captioning, and iii) 18 multimodal large language model benchmarks. Our models establish 37 new state-of-the-art records. We hope that our research contributes to the development of omni-modal intelligence. Code and models are available at https://github.com/invictus717/MiCo
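
The abstract does not spell out MiCo's training objective, so the sketch below only illustrates the general idea of aligning several modalities in one shared embedding space, here with a generic pairwise contrastive (InfoNCE) loss in PyTorch. Every name in it (info_nce, omni_modal_alignment_loss, the modality keys, the temperature) is a hypothetical placeholder and is not taken from MiCo's code or method.

```python
# Illustrative sketch only: a generic way to align embeddings from multiple
# modality encoders in one shared space; not MiCo's actual objective or API.
import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of embeddings with matched rows."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)    # row i matches column i
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def omni_modal_alignment_loss(embeddings: dict) -> torch.Tensor:
    """Average the pairwise contrastive losses over every pair of modalities."""
    names = list(embeddings)
    losses = [info_nce(embeddings[m], embeddings[n])
              for i, m in enumerate(names) for n in names[i + 1:]]
    return torch.stack(losses).mean()

# Usage: per-sample embeddings produced by any number of modality encoders.
batch, dim = 8, 512
feats = {m: torch.randn(batch, dim) for m in ["image", "text", "audio", "depth"]}
loss = omni_modal_alignment_loss(feats)
```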