Unlearning Comparator: A Visual Analytics System for Comparative Evaluation of Machine Unlearning Methods
August 18, 2025
Authors: Jaeung Lee, Suhyeon Yu, Yurim Jang, Simon S. Woo, Jaemin Jo
cs.AI
Abstract
Machine Unlearning (MU) aims to remove target training data from a trained
model so that the removed data no longer influences the model's behavior,
fulfilling "right to be forgotten" obligations under data privacy laws. Yet, we
observe that researchers in this rapidly emerging field face challenges in
analyzing and understanding the behavior of different MU methods, especially in
terms of three fundamental principles in MU: accuracy, efficiency, and privacy.
Consequently, they often rely on aggregate metrics and ad-hoc evaluations,
making it difficult to accurately assess the trade-offs between methods. To
fill this gap, we introduce a visual analytics system, Unlearning Comparator,
designed to facilitate the systematic evaluation of MU methods. Our system
supports two important tasks in the evaluation process: model comparison and
attack simulation. First, it allows the user to compare the behaviors of two
models, such as a model generated by a certain method and a retrained baseline,
at class-, instance-, and layer-levels to better understand the changes made
after unlearning. Second, our system simulates membership inference attacks
(MIAs) to evaluate the privacy of a method, where an attacker attempts to
determine whether specific data samples were part of the original training set.
We evaluate our system through a case study visually analyzing prominent MU
methods and demonstrate that it helps the user not only understand model
behaviors but also gain insights that can inform the improvement of MU methods.
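The membership inference attack described above can be sketched with a simple confidence-threshold attacker. This is a minimal illustration, not the paper's actual MIA implementation: all function names, thresholds, and data here are hypothetical, and the core assumption is that overfit models assign higher confidence to samples they were trained on.

```python
def mia_threshold_attack(confidences, threshold=0.9):
    """Predict 'member' (True) when the model's confidence on a sample
    exceeds the threshold. Illustrative attacker; real MIAs often use
    shadow models or per-sample loss calibration instead."""
    return [c > threshold for c in confidences]

def mia_advantage(member_preds, nonmember_preds):
    """Attack advantage = true-positive rate minus false-positive rate.
    An advantage near 0.0 means the attacker does no better than chance,
    which is the goal for a well-unlearned model."""
    tpr = sum(member_preds) / len(member_preds)
    fpr = sum(nonmember_preds) / len(nonmember_preds)
    return tpr - fpr

# Toy, hand-picked confidences: training-set members tend to score higher.
member_conf = [0.99, 0.97, 0.95, 0.80]      # samples from the training set
nonmember_conf = [0.70, 0.92, 0.60, 0.55]   # held-out samples

adv = mia_advantage(
    mia_threshold_attack(member_conf),
    mia_threshold_attack(nonmember_conf),
)
# Here tpr=0.75 and fpr=0.25, so the attacker's advantage is 0.5,
# indicating the members are still distinguishable (unlearning failed).
```

In the system's evaluation workflow, an advantage close to zero after unlearning suggests the removed samples no longer leave a detectable membership signal.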