Detecting Pretraining Data from Large Language Models
October 25, 2023
Authors: Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer
cs.AI
Abstract
Although large language models (LLMs) are widely deployed, the data used to
train them is rarely disclosed. Given the incredible scale of this data, up to
trillions of tokens, it is all but certain that it includes potentially
problematic text such as copyrighted materials, personally identifiable
information, and test data for widely reported reference benchmarks. However,
we currently have no way to know which data of these types is included or in
what proportions. In this paper, we study the pretraining data detection
problem: given a piece of text and black-box access to an LLM without knowing
the pretraining data, can we determine if the model was trained on the provided
text? To facilitate this study, we introduce WIKIMIA, a dynamic benchmark that
uses data created before and after model training to support gold-standard
detection. We also introduce Min-K% Prob, a new detection method based on a
simple hypothesis: an unseen example is likely to contain a few outlier words
with low probabilities under the LLM, while a seen example is less likely to
have words with such low probabilities. Min-K% Prob can be applied without any
knowledge about the pretraining corpus or any additional training, departing
from previous detection methods that require training a reference model on data
that is similar to the pretraining data. Moreover, our experiments demonstrate
that Min-K% Prob achieves a 7.4% improvement on WIKIMIA over these previous
methods. We apply Min-K% Prob to two real-world scenarios, copyrighted book
detection and contaminated downstream example detection, and find it a
consistently effective solution.
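
For illustration, the sketch below shows one way to compute a Min-K% Prob-style score following the hypothesis stated in the abstract: score a text by the average log-probability of its k% least-likely tokens under the LLM, with higher scores suggesting the text was seen during pretraining. This is a minimal sketch, not the authors' released implementation; the model name ("gpt2"), the choice k = 0.2, and the helper min_k_percent_prob are illustrative assumptions.

```python
# Minimal sketch of the Min-K% Prob idea (not the authors' reference code).
# Assumes a Hugging Face causal LM; model name, k, and example text are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_percent_prob(text: str, model, tokenizer, k: float = 0.2) -> float:
    """Average log-probability of the k% least-likely tokens in `text`."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids).logits  # [1, seq_len, vocab_size]
    # Log-probability the model assigns to each actual next token.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_log_probs = log_probs.gather(
        1, input_ids[0, 1:].unsqueeze(-1)
    ).squeeze(-1)
    # Keep only the k% lowest-probability ("outlier") tokens and average them.
    num_kept = max(1, int(k * token_log_probs.numel()))
    lowest = torch.topk(token_log_probs, num_kept, largest=False).values
    return lowest.mean().item()

if __name__ == "__main__":
    model_name = "gpt2"  # placeholder; the paper evaluates much larger LLMs
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()
    score = min_k_percent_prob(
        "The quick brown fox jumps over the lazy dog.", model, tokenizer
    )
    # Texts seen during pretraining should score higher (less negative);
    # thresholding this score gives the membership decision.
    print(f"Min-K% Prob score: {score:.3f}")
```

In practice, the decision threshold on this score would be calibrated on texts with known membership status, for example the before-/after-training split that WIKIMIA provides.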