
Controlling the Extraction of Memorized Data from Large Language Models via Prompt-Tuning

May 19, 2023
Authors: Mustafa Safa Ozdayi, Charith Peris, Jack FitzGerald, Christophe Dupuy, Jimit Majmudar, Haidar Khan, Rahil Parikh, Rahul Gupta
cs.AI

Abstract

Large Language Models (LLMs) are known to memorize significant portions of their training data. Parts of this memorized content have been shown to be extractable by simply querying the model, which poses a privacy risk. We present a novel approach which uses prompt-tuning to control the extraction rates of memorized content in LLMs. We present two prompt training strategies to increase and decrease extraction rates, which correspond to an attack and a defense, respectively. We demonstrate the effectiveness of our techniques by using models from the GPT-Neo family on a public benchmark. For the 1.3B parameter GPT-Neo model, our attack yields a 9.3 percentage point increase in extraction rate compared to our baseline. Our defense can be tuned to achieve different privacy-utility trade-offs by a user-specified hyperparameter. We achieve an extraction rate reduction of up to 97.7% relative to our baseline, with a perplexity increase of 16.9%.
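The abstract does not spell out the training objective, but the core mechanism it names, prompt-tuning, optimizes a small set of learnable "soft prompt" embeddings prepended to the input while the LLM's weights stay frozen. Below is a minimal sketch of that setup for a GPT-Neo checkpoint, assuming the HuggingFace `transformers` API; the prompt length, learning rate, and the attack/defense loss split are hypothetical illustrations of the idea, not the authors' exact method.

```python
# Sketch: soft prompt-tuning to influence extraction of memorized suffixes.
# Only the soft prompt is trained; the base model is frozen.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-1.3B"  # 1.3B model referenced in the abstract
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

n_prompt_tokens = 20  # hypothetical soft-prompt length
embed = model.get_input_embeddings()
soft_prompt = torch.nn.Parameter(
    embed.weight[:n_prompt_tokens].clone()  # init from real token embeddings
)
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def prompt_tuning_loss(prefix_ids, suffix_ids, mode="attack"):
    """Language-modeling loss on the memorized suffix, with the soft prompt prepended.

    mode="attack": minimize suffix loss to raise extraction rates.
    mode="defense": here simply negated to lower them; the paper's defense
    instead balances privacy against utility via a user-specified hyperparameter.
    """
    input_ids = torch.cat([prefix_ids, suffix_ids], dim=1)
    tok_embeds = embed(input_ids)
    batch = tok_embeds.size(0)
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    inputs_embeds = torch.cat([prompt, tok_embeds], dim=1)

    # Labels: ignore the soft prompt and the prefix, score only the suffix.
    ignore = torch.full((batch, n_prompt_tokens + prefix_ids.size(1)), -100)
    labels = torch.cat([ignore, suffix_ids], dim=1)

    out = model(inputs_embeds=inputs_embeds, labels=labels)
    return out.loss if mode == "attack" else -out.loss
```

In this reading, the attack prompt is trained to make memorized continuations more likely given their training prefixes, while the defense prompt is trained in the opposite direction, with the trade-off hyperparameter controlling how much perplexity degradation is accepted for a given drop in extraction rate.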