To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models
July 2, 2024
Authors: Bozhong Tian, Xiaozhuan Liang, Siyuan Cheng, Qingbin Liu, Mengru Wang, Dianbo Sui, Xi Chen, Huajun Chen, Ningyu Zhang
cs.AI
Abstract
Large Language Models (LLMs) trained on extensive corpora inevitably retain
sensitive data, such as personal privacy information and copyrighted material.
Recent advancements in knowledge unlearning involve updating LLM parameters to
erase specific knowledge. However, current unlearning paradigms are mired in
vague forgetting boundaries, often erasing knowledge indiscriminately. In this
work, we introduce KnowUnDo, a benchmark containing copyrighted content and
user privacy domains to evaluate if the unlearning process inadvertently erases
essential knowledge. Our findings indicate that existing unlearning methods
often suffer from excessive unlearning. To address this, we propose a simple
yet effective method, MemFlex, which utilizes gradient information to precisely
target and unlearn sensitive parameters. Experimental results show that MemFlex
is superior to existing methods in both precise knowledge unlearning and
general knowledge retention in LLMs. Code and dataset will be released at
https://github.com/zjunlp/KnowUnDo.
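Since the abstract describes MemFlex only at a high level ("utilizes gradient information to precisely target and unlearn sensitive parameters"), the following is a minimal PyTorch sketch of that general idea: localize parameter tensors by gradient magnitude on the forget set, then apply an unlearning update only to those tensors. This is an illustrative assumption-laden sketch, not the authors' released implementation; the model choice, the mean-gradient saliency score, the `threshold` cutoff, and the gradient-ascent update are all hypothetical placeholders. See the repository above for the actual code.

```python
# Sketch of gradient-based parameter localization for unlearning, in the
# spirit of MemFlex as described in the abstract. All function names and
# the saliency threshold are illustrative assumptions, not the paper's API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model
tokenizer = AutoTokenizer.from_pretrained("gpt2")


def locate_sensitive_params(model, forget_batch, threshold=0.5):
    """Rank parameter tensors by mean gradient magnitude on the forget set
    and keep those above a fraction of the maximum saliency."""
    model.zero_grad()
    out = model(**forget_batch, labels=forget_batch["input_ids"])
    out.loss.backward()
    saliency = {
        name: p.grad.detach().abs().mean().item()
        for name, p in model.named_parameters()
        if p.grad is not None
    }
    cutoff = max(saliency.values()) * threshold  # hypothetical rule
    return {name for name, s in saliency.items() if s >= cutoff}


def unlearn_step(model, forget_batch, sensitive, lr=1e-5):
    """One gradient-ascent step on the forget loss, restricted to the
    localized parameters; all other weights stay untouched."""
    model.zero_grad()
    out = model(**forget_batch, labels=forget_batch["input_ids"])
    out.loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in sensitive and p.grad is not None:
                p += lr * p.grad  # ascend: raise loss on sensitive data


# Toy usage: "forget" one privacy-like string.
batch = tokenizer("Alice's phone number is 555-0100", return_tensors="pt")
sensitive = locate_sensitive_params(model, batch)
unlearn_step(model, batch, sensitive)
```

Restricting the update to high-saliency tensors is what gives the sharper forgetting boundary the abstract argues for: parameters that carry general knowledge receive no update, so retention is preserved while the targeted knowledge is erased.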