MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens
June 17, 2024
Authors: Anas Awadalla, Le Xue, Oscar Lo, Manli Shu, Hannah Lee, Etash Kumar Guha, Matt Jordan, Sheng Shen, Mohamed Awadalla, Silvio Savarese, Caiming Xiong, Ran Xu, Yejin Choi, Ludwig Schmidt
cs.AI
Abstract
Multimodal interleaved datasets featuring free-form interleaved sequences of
images and text are crucial for training frontier large multimodal models
(LMMs). Despite the rapid progression of open-source LMMs, there remains a
pronounced scarcity of large-scale, diverse open-source multimodal interleaved
datasets. In response, we introduce MINT-1T, the most extensive and diverse
open-source Multimodal INTerleaved dataset to date. MINT-1T comprises one
trillion text tokens and three billion images, a 10x scale-up from existing
open-source datasets. Additionally, we include previously untapped sources such
as PDFs and ArXiv papers. As scaling multimodal interleaved datasets requires
substantial engineering effort, sharing the data curation process and releasing
the dataset greatly benefits the community. Our experiments show that LMMs
trained on MINT-1T rival the performance of models trained on the previous
leading dataset, OBELICS. Our data and code will be released at
https://github.com/mlfoundations/MINT-1T.
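
To make "interleaved" concrete, the sketch below shows one plausible way a multimodal interleaved document could be represented: an ordered list of text spans and image references that preserves the original reading order of the source page or PDF. The record layout, field names, and source labels are illustrative assumptions, not the actual MINT-1T schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union

# Hypothetical record layout for illustration only; the released MINT-1T
# format may differ. An interleaved document keeps text spans and images
# in the order they appeared in the source.

@dataclass
class ImageRef:
    url: str                      # where the image was found
    sha256: Optional[str] = None  # content hash, e.g. for deduplication

@dataclass
class InterleavedDoc:
    source: str                                      # assumed labels: "html", "pdf", "arxiv"
    segments: List[Union[str, ImageRef]] = field(default_factory=list)

# Example: a short document whose text and images alternate freely.
doc = InterleavedDoc(
    source="html",
    segments=[
        "Figure 1 shows the proposed architecture.",
        ImageRef(url="https://example.com/figure1.png"),
        "We then evaluate the model on downstream benchmarks.",
    ],
)

# Simple per-document statistics: image count and amount of text.
num_images = sum(isinstance(s, ImageRef) for s in doc.segments)
num_text_chars = sum(len(s) for s in doc.segments if isinstance(s, str))
print(num_images, num_text_chars)
```

A flat, order-preserving list of mixed segments like this is what lets an LMM be trained on free-form image-text sequences rather than on isolated caption pairs.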