Harnessing large-language models to generate private synthetic text

June 2, 2023
Authors: Alexey Kurakin, Natalia Ponomareva, Umar Syed, Liam MacDermed, Andreas Terzis
cs.AI

Abstract

Differentially private (DP) training methods like DP-SGD can protect sensitive training data by ensuring that ML models will not reveal private information. An alternative approach, which this paper studies, is to use a sensitive dataset to generate a new synthetic dataset that is differentially private with respect to the original data. Doing so has several advantages: synthetic data can be reused for other tasks (including hyperparameter tuning), retained indefinitely, or shared with third parties without sacrificing privacy. However, obtaining DP data is much harder than introducing DP during training. To make it feasible for text, recent work has utilized public data by starting with a pre-trained generative language model and privately fine-tuning it on sensitive data. This model can then be used to sample a DP synthetic dataset. While this strategy seems straightforward, executing it has proven problematic. Previous approaches either show significant performance loss or, as we show, have critical design flaws. In this paper we demonstrate that a proper training objective, along with tuning fewer parameters, results in excellent DP synthetic data quality. Our approach is competitive with direct DP training of downstream classifiers in terms of performance on downstream tasks. We also demonstrate that our DP synthetic data is useful not only for training downstream classifiers, but also for tuning those same models.
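
The recipe the abstract describes can be made concrete with a short sketch: privately fine-tune a pre-trained generative model with DP-SGD, then sample a synthetic corpus from it. The code below is a minimal illustration, not the authors' implementation. The choice of "gpt2", the use of the Hugging Face transformers, peft, and opacus libraries, and every hyperparameter (target_epsilon, LoRA rank, learning rate, epochs) are assumptions made for the sketch; whether opacus and peft compose cleanly for a particular model is itself an assumption worth verifying.

```python
# Sketch (not the authors' code): DP fine-tuning of a pre-trained LM,
# followed by sampling a synthetic dataset. All names and numbers below
# are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model  # "tuning fewer parameters"
from opacus import PrivacyEngine             # DP-SGD: clipping + noise

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Parameter-efficient fine-tuning: freeze the base model and train only
# small low-rank adapters, so far fewer weights receive noisy updates.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Placeholder stand-in for the sensitive dataset.
texts = ["example sensitive record one", "example sensitive record two"]
enc = tokenizer(texts, return_tensors="pt", padding=True)
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"]), batch_size=2)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Attach DP-SGD: per-example gradient clipping plus Gaussian noise,
# calibrated so the whole fine-tuning run satisfies (epsilon, delta)-DP.
model, optimizer, loader = PrivacyEngine().make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    target_epsilon=8.0,
    target_delta=1e-5,
    epochs=3,
    max_grad_norm=1.0,
)

for _ in range(3):
    for input_ids, attention_mask in loader:
        optimizer.zero_grad()
        out = model(input_ids=input_ids, attention_mask=attention_mask, labels=input_ids)
        out.loss.backward()  # opacus intercepts this to clip and noise per-example grads
        optimizer.step()

# Sample a synthetic corpus from the privately fine-tuned model.
bos = torch.tensor([[tokenizer.bos_token_id]])
ids = model.generate(bos, do_sample=True, top_k=50, max_length=64, num_return_sequences=4)
synthetic_texts = tokenizer.batch_decode(ids, skip_special_tokens=True)
```

Because differential privacy is closed under post-processing, text sampled from the privately fine-tuned model carries the same (epsilon, delta) guarantee as the model weights themselves; this is what lets the synthetic corpus be retained indefinitely, shared with third parties, or reused for hyperparameter tuning at no additional privacy cost.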