mOSCAR: A Large-scale Multilingual and Multimodal Document-level Corpus
June 13, 2024
Authors: Matthieu Futeral, Armel Zebaze, Pedro Ortiz Suarez, Julien Abadji, Rémi Lacroix, Cordelia Schmid, Rachel Bawden, Benoît Sagot
cs.AI
Abstract
Multimodal Large Language Models (mLLMs) are trained on a large amount of
text-image data. While most mLLMs are trained on caption-like data only,
Alayrac et al. [2022] showed that additionally training them on interleaved
sequences of text and images can lead to the emergence of in-context learning
capabilities. However, the dataset they used, M3W, is not public and is only in
English. There have been attempts to reproduce their results but the released
datasets are English-only. In contrast, current multilingual and multimodal
datasets are either composed of caption-like data only, are medium-scale, or
are fully private. This limits mLLM research for the 7,000 other languages spoken in
the world. We therefore introduce mOSCAR, to the best of our knowledge the
first large-scale multilingual and multimodal document corpus crawled from the
web. It covers 163 languages, 315M documents, 214B tokens and 1.2B images. We
carefully conduct a set of filtering and evaluation steps to make sure mOSCAR
is sufficiently safe, diverse and of good quality. We additionally train two
types of multilingual model to prove the benefits of mOSCAR: (1) a model
trained on a subset of mOSCAR and captioning data and (2) a model trained on
captioning data only. The model additionally trained on mOSCAR shows a strong
boost in few-shot learning performance across various multilingual image-text
tasks and benchmarks, confirming previous findings for English-only mLLMs.
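
As a rough illustration of how such an interleaved document corpus might be consumed, the sketch below streams a few records with the Hugging Face `datasets` library and prints their fields. The dataset identifier `oscar-corpus/mOSCAR`, the per-language configuration name `eng_Latn`, and the record schema are assumptions made for illustration, not details stated in the abstract.

```python
# Minimal sketch: inspecting an interleaved text-image corpus such as mOSCAR.
# Assumptions (not confirmed by the abstract): the Hugging Face dataset id
# "oscar-corpus/mOSCAR", the per-language config "eng_Latn", and the schema.
from datasets import load_dataset

# Stream the split so the full corpus (315M documents, 1.2B images) is never
# downloaded in full; records are fetched lazily one at a time.
docs = load_dataset("oscar-corpus/mOSCAR", "eng_Latn",
                    split="train", streaming=True)

# Peek at the first document to see how text nodes and image references are
# interleaved within a single web document.
first = next(iter(docs))
for field, value in first.items():
    print(f"{field}: {str(value)[:80]}")
```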