🤗 Upvotes: 21 | cs.CL
Authors:
Shi Yu, Zhiyuan Liu, Chenyan Xiong
Title:
Craw4LLM: Efficient Web Crawling for LLM Pretraining
arXiv:
http://arxiv.org/abs/2502.13347v1
Abstract:
Web crawls are a main source of large language models' (LLMs) pretraining data, but the majority of crawled web pages are discarded during pretraining due to low data quality. This paper presents Crawl4LLM, an efficient web crawling method that explores the web graph based on the preferences of LLM pretraining. Specifically, it uses a webpage's influence on LLM pretraining as the priority score in the web crawler's scheduler, replacing the standard graph-connectivity-based priority. Our experiments on a web graph containing 900 million webpages from a commercial search engine's index demonstrate the efficiency of Crawl4LLM in obtaining high-quality pretraining data. With just 21% of URLs crawled, LLMs pretrained on Crawl4LLM data reach the same downstream performance as previous crawls, significantly reducing crawling waste and alleviating the burden on websites. Our code is publicly available at https://github.com/cxcscmu/Crawl4LLM.
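The abstract describes swapping the scheduler's connectivity-based priority (e.g., indegree or PageRank-style scores) for an estimated pretraining-influence score. Below is a minimal sketch of such a best-first crawl loop; `score_fn`, `fetch_fn`, `extract_links`, and `budget` are hypothetical placeholders, not names from the paper's code, and scoring a candidate page before fetching is an assumption that roughly matches the paper's simulation setting, where documents in the web graph are available to the scorer.

```python
import heapq

def influence_first_crawl(seed_urls, score_fn, fetch_fn, extract_links, budget):
    """Best-first crawl where the frontier is ordered by an estimated
    LLM-pretraining influence score rather than by graph connectivity.

    Hypothetical callables (assumptions, not the paper's API):
      score_fn(url)       -> float, estimated pretraining value of the page
      fetch_fn(url)       -> str, the page's text
      extract_links(text) -> iterable of outlink URLs
    """
    # Python's heapq is a min-heap, so negate scores for max-priority-first.
    frontier = [(-score_fn(u), u) for u in seed_urls]
    heapq.heapify(frontier)
    seen = set(seed_urls)
    corpus = []

    while frontier and len(corpus) < budget:
        _, url = heapq.heappop(frontier)   # highest estimated influence first
        text = fetch_fn(url)
        corpus.append((url, text))
        for link in extract_links(text):
            if link not in seen:
                seen.add(link)
                # Priority = estimated pretraining influence, not connectivity.
                heapq.heappush(frontier, (-score_fn(link), link))
    return corpus
```

Under this scheduling, low-value regions of the web graph are simply never expanded, which is consistent with the abstract's claim of matching downstream performance while crawling only 21% of the URLs.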
Information
- Frequency: Daily
- Published: February 21, 2025, 04:42 UTC
- Length: 23 minutes
- Episode: 589
- Rating: Suitable for children