The Landscape of Memorization in LLMs: Mechanisms, Measurement, and Mitigation
July 8, 2025
Authors: Alexander Xiong, Xuandong Zhao, Aneesh Pappu, Dawn Song
cs.AI
Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities across
a wide range of tasks, yet they also exhibit memorization of their training
data. This phenomenon raises critical questions about model behavior, privacy
risks, and the boundary between learning and memorization. Addressing these
concerns, this paper synthesizes recent studies and investigates the landscape
of memorization, the factors influencing it, and methods for its detection and
mitigation. We explore key drivers, including training data duplication,
training dynamics, and fine-tuning procedures that influence data memorization.
In addition, we examine methodologies such as prefix-based extraction,
membership inference, and adversarial prompting, assessing their effectiveness
in detecting and measuring memorized content. Beyond technical analysis, we
also examine the broader implications of memorization, including legal and
ethical considerations. Finally, we discuss mitigation strategies, including data
cleaning, differential privacy, and post-training unlearning, while
highlighting open challenges in balancing the minimization of harmful
memorization with utility. This paper provides a comprehensive overview of the
current state of research on LLM memorization across technical, privacy, and
performance dimensions, identifying critical directions for future work.
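As a rough illustration of the prefix-based extraction tests mentioned above, the sketch below feeds a model a prefix drawn from a candidate training document and checks whether greedy decoding reproduces the true continuation verbatim. The model name "gpt2" and the 50-token prefix/suffix lengths are placeholder choices, not settings taken from the surveyed work.

```python
# Sketch of a prefix-based extraction check: a sequence counts as (approximately)
# memorized if greedy decoding of its prefix reproduces the true suffix exactly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def is_extractable(text: str, prefix_len: int = 50, suffix_len: int = 50) -> bool:
    """True if the model's greedy continuation of the prefix matches the true suffix."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    if ids.numel() < prefix_len + suffix_len:
        return False
    prefix = ids[:prefix_len]
    true_suffix = ids[prefix_len:prefix_len + suffix_len]
    with torch.no_grad():
        out = model.generate(
            prefix.unsqueeze(0),
            max_new_tokens=suffix_len,
            do_sample=False,  # greedy decoding
            pad_token_id=tokenizer.eos_token_id,
        )
    return torch.equal(out[0, prefix_len:prefix_len + suffix_len], true_suffix)
```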
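Membership inference can similarly be sketched in its simplest loss-thresholding form: score a candidate text by its average per-token loss under the model and flag low-loss texts as likely training members. The fixed threshold below is an arbitrary illustration; the attacks surveyed in the paper generally calibrate against reference models or per-example difficulty.

```python
# Sketch of a loss-thresholding membership-inference baseline: lower loss
# suggests the text was seen during training. Threshold is purely illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()

def sequence_nll(text: str) -> float:
    """Average per-token negative log-likelihood of the text under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # cross-entropy averaged over tokens
    return loss.item()

def predict_member(text: str, threshold: float = 3.0) -> bool:
    """Flag the text as a likely training member if its loss falls below the threshold."""
    return sequence_nll(text) < threshold
```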
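On the mitigation side, data cleaning often starts with deduplication, since repeated training sequences are a key driver of memorization. The sketch below keeps only the first occurrence of each exactly matching (whitespace- and case-normalized) document; real pipelines typically add fuzzy matching such as MinHash over n-grams.

```python
# Sketch of exact-duplicate filtering over a text corpus, a simplified stand-in
# for the deduplication step of a data-cleaning pipeline.
import hashlib

def deduplicate(docs: list[str]) -> list[str]:
    """Keep the first occurrence of each document, keyed by a hash of its normalized text."""
    seen: set[str] = set()
    kept: list[str] = []
    for doc in docs:
        key = hashlib.sha256(" ".join(doc.split()).lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept
```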