
MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens

June 17, 2024
作者: Anas Awadalla, Le Xue, Oscar Lo, Manli Shu, Hannah Lee, Etash Kumar Guha, Matt Jordan, Sheng Shen, Mohamed Awadalla, Silvio Savarese, Caiming Xiong, Ran Xu, Yejin Choi, Ludwig Schmidt
cs.AI

Abstract

Multimodal interleaved datasets featuring free-form interleaved sequences of images and text are crucial for training frontier large multimodal models (LMMs). Despite the rapid progression of open-source LMMs, there remains a pronounced scarcity of large-scale, diverse open-source multimodal interleaved datasets. In response, we introduce MINT-1T, the most extensive and diverse open-source Multimodal INTerleaved dataset to date. MINT-1T comprises one trillion text tokens and three billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. As scaling multimodal interleaved datasets requires substantial engineering effort, sharing the data curation process and releasing the dataset greatly benefits the community. Our experiments show that LMMs trained on MINT-1T rival the performance of models trained on the previous leading dataset, OBELICS. Our data and code will be released at https://github.com/mlfoundations/MINT-1T.
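
The abstract describes documents as free-form interleaved sequences of images and text. Below is a minimal sketch of how such a record might be represented and flattened into a training sequence for an LMM; the `InterleavedDoc` fields and the `<image>` placeholder token are illustrative assumptions, not the released MINT-1T schema.

```python
# Illustrative sketch of an interleaved multimodal document.
# Field names ("texts", "images") and the "<image>" placeholder are
# assumptions for demonstration, not the actual MINT-1T format.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class InterleavedDoc:
    """One document: parallel lists where each position holds either
    a text span or an image reference (the other entry is None)."""
    texts: List[Optional[str]]
    images: List[Optional[str]]  # e.g. paths or URLs to image files


def to_training_sequence(doc: InterleavedDoc, image_token: str = "<image>") -> str:
    """Flatten an interleaved document into one string, replacing each
    image with a placeholder token while preserving the original order."""
    pieces = []
    for text, image in zip(doc.texts, doc.images):
        if text is not None:
            pieces.append(text)
        elif image is not None:
            pieces.append(image_token)
    return " ".join(pieces)


# Example: a web-style document alternating prose and a figure.
doc = InterleavedDoc(
    texts=["Figure 1 shows the pipeline.", None, "Results follow."],
    images=[None, "figures/pipeline.png", None],
)
print(to_training_sequence(doc))
# -> "Figure 1 shows the pipeline. <image> Results follow."
```

In practice, the placeholder positions would be paired with the corresponding image tensors at model input time; the sketch only illustrates the interleaved text/image ordering that distinguishes this kind of dataset from caption-only image-text pairs.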
