DataComp-LM: In search of the next generation of training sets for language models

June 17, 2024
Authors: Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Gadre, Hritik Bansal, Etash Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Chandu, Thao Nguyen, Igor Vasiljevic, Sham Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, Vaishaal Shankar
cs.AI

Abstract

We introduce DataComp for Language Models (DCLM), a testbed for controlled dataset experiments with the goal of improving language models. As part of DCLM, we provide a standardized corpus of 240T tokens extracted from Common Crawl, effective pretraining recipes based on the OpenLM framework, and a broad suite of 53 downstream evaluations. Participants in the DCLM benchmark can experiment with data curation strategies such as deduplication, filtering, and data mixing at model scales ranging from 412M to 7B parameters. As a baseline for DCLM, we conduct extensive experiments and find that model-based filtering is key to assembling a high-quality training set. The resulting dataset, DCLM-Baseline, enables training a 7B parameter language model from scratch to 64% 5-shot accuracy on MMLU with 2.6T training tokens. Compared to MAP-Neo, the previous state-of-the-art in open-data language models, DCLM-Baseline represents a 6.6 percentage point improvement on MMLU while being trained with 40% less compute. Our baseline model is also comparable to Mistral-7B-v0.3 and Llama 3 8B on MMLU (63% & 66%), and performs similarly on an average of 53 natural language understanding tasks while being trained with 6.6x less compute than Llama 3 8B. Our results highlight the importance of dataset design for training language models and offer a starting point for further research on data curation.
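
The abstract identifies model-based filtering as the key curation step. The sketch below illustrates what such a filter can look like in practice: a fastText-style binary quality classifier scores each document, and only documents above a score threshold are kept. The model path `quality_classifier.bin`, the label names `__label__hq`/`__label__lq`, and the 0.5 threshold are illustrative assumptions, not details drawn from the paper itself.

```python
# Minimal sketch of model-based quality filtering (illustrative, not the paper's exact pipeline).
# Assumptions: a fastText binary classifier saved as "quality_classifier.bin" with labels
# "__label__hq" / "__label__lq", and documents supplied as dicts with a "text" field.
import fasttext


def quality_score(model, text: str) -> float:
    """Return the classifier's probability that a document is high quality."""
    # fastText's predict() expects a single line of text, so strip newlines first.
    labels, probs = model.predict(text.replace("\n", " "), k=2)
    scores = dict(zip(labels, probs))
    return scores.get("__label__hq", 0.0)


def filter_documents(documents, model_path="quality_classifier.bin", threshold=0.5):
    """Yield only documents whose predicted quality meets the threshold."""
    model = fasttext.load_model(model_path)
    for doc in documents:
        if quality_score(model, doc["text"]) >= threshold:
            yield doc


if __name__ == "__main__":
    # Toy usage example with two hypothetical documents.
    docs = [
        {"text": "A well-written explanation of gradient descent and its convergence."},
        {"text": "click here buy now !!!"},
    ]
    kept = list(filter_documents(docs))
    print(f"kept {len(kept)} of {len(docs)} documents")
```

In a large-scale setting this scoring step would typically run over sharded Common Crawl extracts in parallel, with the threshold chosen to retain a target fraction of the corpus rather than fixed in advance.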
