Craw4LLM: Efficient Web Crawling for LLM Pretraining
February 19, 2025
Authors: Shi Yu, Zhiyuan Liu, Chenyan Xiong
cs.AI
Abstract
Web crawls are a main source of pretraining data for large language models (LLMs),
but the majority of crawled web pages are discarded in pretraining due to low
data quality. This paper presents Crawl4LLM, an efficient web crawling method
that explores the web graph based on the preference of LLM pretraining.
Specifically, it leverages the influence of a webpage in LLM pretraining as the
priority score of the web crawler's scheduler, replacing the standard graph
connectivity based priority. Our experiments on a web graph containing 900
million webpages from a commercial search engine's index demonstrate the
efficiency of Crawl4LLM in obtaining high-quality pretraining data. With just
21% of URLs crawled, LLMs pretrained on Crawl4LLM data reach the same downstream
performance as previous crawls, significantly reducing crawling waste and
alleviating the burdens on websites. Our code is publicly available at
https://github.com/cxcscmu/Crawl4LLM.
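To illustrate the idea described in the abstract, below is a minimal Python sketch of a crawler scheduler whose frontier is a priority queue keyed by a page's pretraining-influence score rather than by graph connectivity (e.g., indegree or PageRank). The helpers `get_page` and `influence_score` are hypothetical stand-ins, not the paper's published API: the experiments run over a static web-graph snapshot, so page text is assumed to be available for scoring when a link is discovered.

```python
import heapq
from typing import Iterable

# Hypothetical helpers (not from the paper's code): (1) look up a page's
# text and outlinks in a web-graph snapshot, (2) rate how useful the page
# is for LLM pretraining (higher = more useful).
def get_page(url: str) -> tuple[str, list[str]]:
    """Return (page_text, outlink_urls) for a URL in the snapshot."""
    raise NotImplementedError

def influence_score(page_text: str) -> float:
    """Pretraining-influence rating of a page."""
    raise NotImplementedError

def crawl(seed_urls: Iterable[str], budget: int) -> list[str]:
    """Influence-prioritized crawl over a static web graph.

    The scheduler pops the highest-scoring page next, replacing the
    usual connectivity-based priority with a pretraining-influence score.
    """
    frontier: list[tuple[float, str]] = []
    seen: set[str] = set()
    for url in seed_urls:
        text, _ = get_page(url)
        # heapq is a min-heap, so negate the score to pop the best page first
        heapq.heappush(frontier, (-influence_score(text), url))
        seen.add(url)

    crawled: list[str] = []
    while frontier and len(crawled) < budget:
        _, url = heapq.heappop(frontier)
        crawled.append(url)
        _, outlinks = get_page(url)
        for link in outlinks:
            if link not in seen:
                seen.add(link)
                link_text, _ = get_page(link)
                heapq.heappush(frontier, (-influence_score(link_text), link))
    return crawled
```

This sketch only captures the scheduling policy the abstract describes; politeness constraints, deduplication, and the specific influence scorer used in the paper are omitted.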