Institutional Books 1.0: A 242B token dataset from Harvard Library's collections, refined for accuracy and usability
June 10, 2025
Authors: Matteo Cargnelutti, Catherine Brobston, John Hess, Jack Cushman, Kristi Mukk, Aristana Scourtas, Kyle Courtney, Greg Leppert, Amanda Watson, Martha Whitehead, Jonathan Zittrain
cs.AI
Abstract
Large language models (LLMs) use data to learn about the world in order to
produce meaningful correlations and predictions. As such, the nature, scale,
quality, and diversity of the datasets used to train these models, or to
support their work at inference time, have a direct impact on their quality.
The rapid development and adoption of LLMs of varying quality has brought into
focus the scarcity of publicly available, high-quality training data and
revealed an urgent need to ground the stewardship of these datasets in
sustainable practices with clear provenance chains. To that end, this technical
report introduces Institutional Books 1.0, a large collection of public domain
books originally digitized through Harvard Library's participation in the
Google Books project, beginning in 2006. Working with Harvard Library, we
extracted, analyzed, and processed these volumes into an extensively documented
dataset of historic texts. This analysis covers the entirety of Harvard
Library's collection scanned as part of that project, originally spanning
1,075,899 volumes written in over 250 different languages for a total of
approximately 250 billion tokens. As part of this initial release, the
OCR-extracted text (original and post-processed) as well as the metadata
(bibliographic, source, and generated) of the 983,004 volumes, or 242B tokens,
identified as being in the public domain have been made available. This report
describes this project's goals and methods as well as the results of the
analyses we performed, all in service of making this historical collection more
accessible and easier for humans and machines alike to filter, read and use.
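To illustrate the kind of machine-side access the release is meant to enable, the sketch below streams the collection with the Hugging Face `datasets` library and filters volumes by a language field in the metadata. This is a minimal, hypothetical example: the repository id (`institutional/institutional-books-1.0`) and the column names (`language`, `title`, `text`) are assumptions for illustration and are not specified in this report.

```python
# Hypothetical sketch: stream and filter Institutional Books 1.0 by metadata.
# The repository id and column names below are assumptions, not confirmed by the report.
from datasets import load_dataset

# Stream the dataset to avoid downloading the full 242B-token release up front.
ds = load_dataset("institutional/institutional-books-1.0", split="train", streaming=True)

# Keep only volumes whose (assumed) language field marks them as English.
english_volumes = ds.filter(lambda volume: volume.get("language") == "eng")

for volume in english_volumes.take(3):
    # Print an (assumed) title field and a short excerpt of the post-processed OCR text.
    print(volume.get("title"), (volume.get("text") or "")[:200])
```

Streaming with a metadata filter like this is one plausible way to take advantage of the bibliographic, source, and generated metadata described above without materializing the entire corpus locally.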