Unilogit: Robust Machine Unlearning for LLMs Using Uniform-Target Self-Distillation

May 9, 2025
作者: Stefan Vasilev, Christian Herold, Baohao Liao, Seyyed Hadi Hashemi, Shahram Khadivi, Christof Monz
cs.AI

Abstract

This paper introduces Unilogit, a novel self-distillation method for machine unlearning in Large Language Models. Unilogit addresses the challenge of selectively forgetting specific information while maintaining overall model utility, a task critical for compliance with data privacy regulations such as GDPR. Unlike prior methods that rely on static hyperparameters or the starting model's outputs, Unilogit dynamically adjusts target logits to achieve a uniform probability for the target token, leveraging the current model's outputs for more accurate self-distillation targets. This approach not only eliminates the need for additional hyperparameters but also enhances the model's ability to approximate the golden targets. Extensive experiments on public benchmarks and an in-house e-commerce dataset demonstrate Unilogit's superior performance in balancing forget and retain objectives, outperforming state-of-the-art methods such as NPO and UnDIAL. Our analysis further reveals Unilogit's robustness across various scenarios, highlighting its practical applicability and effectiveness in achieving machine unlearning.
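
To make the core idea concrete, below is a minimal PyTorch sketch of a Unilogit-style forget loss, reconstructed from the abstract alone rather than the paper's exact equations. The function name, the KL direction, and the choice to redistribute the remaining probability mass over non-target tokens in proportion to the current model's own probabilities are assumptions for illustration.

```python
# Sketch of a uniform-target self-distillation forget loss (Unilogit-style).
# Assumptions (not from the paper): loss direction KL(target || model) via
# F.kl_div, and non-target mass renormalized from the current model's probs.
import torch
import torch.nn.functional as F

def unilogit_forget_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (batch, seq, vocab) from the *current* model; targets: (batch, seq)."""
    vocab = logits.size(-1)
    probs = logits.softmax(dim=-1)

    # The true token's target probability is set to the uniform value 1/V,
    # as described in the abstract ("uniform probability for the target token").
    uniform_p = 1.0 / vocab
    target_probs = probs.clone().detach()  # targets carry no gradient
    true_p = target_probs.gather(-1, targets.unsqueeze(-1))

    # Spread the remaining mass (1 - 1/V) over the other tokens in proportion
    # to the current model's probabilities (the self-distillation component).
    scale = (1.0 - uniform_p) / (1.0 - true_p).clamp_min(1e-12)
    target_probs = target_probs * scale
    target_probs.scatter_(-1, targets.unsqueeze(-1), uniform_p)

    # Distill the model toward the constructed target distribution.
    log_probs = F.log_softmax(logits, dim=-1)
    return F.kl_div(log_probs, target_probs, reduction="batchmean")
```

Note how this construction has no tunable knob: the target value 1/V is fixed by the vocabulary size and the rest of the distribution comes from the current model's outputs, which is consistent with the abstract's claim that Unilogit eliminates the additional hyperparameters required by static-target methods such as UnDIAL.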
