
The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale

June 25, 2024
Authors: Guilherme Penedo, Hynek Kydlíček, Loubna Ben Allal, Anton Lozhkov, Margaret Mitchell, Colin Raffel, Leandro Von Werra, Thomas Wolf
cs.AI

Abstract

The performance of a large language model (LLM) depends heavily on the quality and size of its pretraining dataset. However, the pretraining datasets for state-of-the-art open LLMs like Llama 3 and Mixtral are not publicly available and very little is known about how they were created. In this work, we introduce FineWeb, a 15-trillion token dataset derived from 96 Common Crawl snapshots that produces better-performing LLMs than other open pretraining datasets. To advance the understanding of how best to curate high-quality pretraining datasets, we carefully document and ablate all of the design choices used in FineWeb, including in-depth investigations of deduplication and filtering strategies. In addition, we introduce FineWeb-Edu, a 1.3-trillion token collection of educational text filtered from FineWeb. LLMs pretrained on FineWeb-Edu exhibit dramatically better performance on knowledge- and reasoning-intensive benchmarks like MMLU and ARC. Along with our datasets, we publicly release our data curation codebase and all of the models trained during our ablation experiments.
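As a minimal sketch of working with the released data, and assuming the datasets are hosted on the Hugging Face Hub under the HuggingFaceFW organization (with a smaller "sample-10BT" configuration available), the corpora can be streamed with the `datasets` library without downloading the full multi-terabyte dump:

```python
# Minimal sketch: stream a few documents from FineWeb-Edu.
# Assumes the dataset is published on the Hugging Face Hub as
# "HuggingFaceFW/fineweb-edu" and exposes a "sample-10BT" config;
# adjust the repository and config names as needed.
from datasets import load_dataset

# Streaming avoids materializing the full corpus on disk.
fineweb_edu = load_dataset(
    "HuggingFaceFW/fineweb-edu",
    name="sample-10BT",
    split="train",
    streaming=True,
)

for i, doc in enumerate(fineweb_edu):
    # Each record carries the extracted page text plus metadata fields.
    print(doc["text"][:200])
    if i == 2:
        break
```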
