Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning
May 19, 2023
Authors: Mustafa Safa Ozdayi, Charith Peris, Jack FitzGerald, Christophe Dupuy, Jimit Majmudar, Haidar Khan, Rahil Parikh, Rahul Gupta
cs.AI
Abstract
Large Language Models (LLMs) are known to memorize significant portions of their training data. Parts of this memorized content have been shown to be extractable by simply querying the model, which poses a privacy risk. We present a novel approach which uses prompt-tuning to control the extraction rates of memorized content in LLMs. We present two prompt training strategies to increase and decrease extraction rates, which correspond to an attack and a defense, respectively. We demonstrate the effectiveness of our techniques using models from the GPT-Neo family on a public benchmark. For the 1.3B-parameter GPT-Neo model, our attack yields a 9.3 percentage point increase in extraction rate compared to our baseline. Our defense can be tuned via a user-specified hyperparameter to achieve different privacy-utility trade-offs. We achieve an extraction rate reduction of up to 97.7% relative to our baseline, with a perplexity increase of 16.9%.
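The abstract does not include implementation details, but the core mechanism of prompt-tuning, training a small set of continuous prompt embeddings while the LM itself stays frozen, can be sketched as follows. This is a minimal illustration assuming the HuggingFace transformers and PyTorch APIs; the prompt length, learning rate, the loss-sign flip used to contrast the attack and defense directions, and the `step` helper are illustrative assumptions, not the paper's actual training recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Smallest GPT-Neo model for illustration; the paper evaluates the
# GPT-Neo family up to 1.3B parameters.
MODEL_NAME = "EleutherAI/gpt-neo-125M"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # freeze the LM; only the soft prompt is trained

embed = model.get_input_embeddings()
PROMPT_LEN = 20  # number of soft-prompt tokens (illustrative choice)
soft_prompt = torch.nn.Parameter(
    embed.weight[:PROMPT_LEN].detach().clone()  # init from real token embeddings
)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def step(input_ids, labels, sign=1.0):
    """One prompt-tuning step. sign=+1.0 lowers the LM loss on memorized
    sequences (attack direction); sign=-1.0 raises it (defense direction).
    The paper's exact objectives may differ from this sketch."""
    inputs_embeds = embed(input_ids)  # (batch, seq_len, hidden)
    prefix = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    inputs_embeds = torch.cat([prefix, inputs_embeds], dim=1)
    # Mask the soft-prompt positions so no loss is computed on them.
    ignore = torch.full((input_ids.size(0), PROMPT_LEN), -100,
                        dtype=labels.dtype, device=labels.device)
    loss = model(inputs_embeds=inputs_embeds,
                 labels=torch.cat([ignore, labels], dim=1)).loss
    (sign * loss).backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Example: one attack-direction step on a tokenized training sequence.
batch = tokenizer("a sequence suspected to be memorized", return_tensors="pt")
step(batch["input_ids"], batch["input_ids"].clone(), sign=1.0)
```

At inference time, the learned `soft_prompt` would be prepended to query embeddings in the same way before generation; the privacy-utility trade-off the abstract mentions would be governed by a hyperparameter weighting the defense objective, which this sketch only gestures at via `sign`.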