CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search
June 16, 2023
Authors: Fahad Shamshad, Muzammal Naseer, Karthik Nandakumar
cs.AI
Abstract
The success of deep learning based face recognition systems has given rise to
serious privacy concerns due to their ability to enable unauthorized tracking
of users in the digital world. Existing methods for enhancing privacy fail to
generate naturalistic images that can protect facial privacy without
compromising user experience. We propose a novel two-step approach for facial
privacy protection that relies on finding adversarial latent codes in the
low-dimensional manifold of a pretrained generative model. The first step
inverts the given face image into the latent space and finetunes the generative
model to achieve an accurate reconstruction of the given image from its latent
code. This step produces a good initialization, aiding the generation of
high-quality faces that resemble the given identity. Subsequently, user-defined
makeup text prompts and identity-preserving regularization are used to guide
the search for adversarial codes in the latent space. Extensive experiments
demonstrate that faces generated by our approach have stronger black-box
transferability with an absolute gain of 12.06% over the state-of-the-art
facial privacy protection approach under the face verification task. Finally,
we demonstrate the effectiveness of the proposed approach for commercial face
recognition systems. Our code is available at
https://github.com/fahadshamshad/Clip2Protect.
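The two-step procedure described in the abstract (invert the face into a latent code, then search that latent space under a text-guidance term, an adversarial term, and an identity-preserving regularizer) can be sketched as a toy optimization loop. This is a minimal illustrative sketch only: the real method uses a StyleGAN generator, a CLIP encoder, and surrogate face recognition models, all of which are replaced here by hypothetical linear stand-ins so the loss structure is visible.

```python
import numpy as np

# Hypothetical stand-ins for the real components: a generator latent space,
# a CLIP "makeup prompt" direction, and a target face embedding. None of
# these reflect the actual CLIP2Protect models; they only mimic the losses.
rng = np.random.default_rng(0)
D = 16                                   # toy latent dimension
w0 = rng.normal(size=D)                  # step 1: inverted latent code of the face
text_dir = rng.normal(size=D)
text_dir /= np.linalg.norm(text_dir)     # stand-in for the makeup text direction
target_emb = rng.normal(size=D)          # stand-in for a target identity embedding

def loss(w, lam_id=0.1):
    # Makeup guidance: push the edit (w - w0) along the text direction.
    l_makeup = -np.dot(w - w0, text_dir)
    # Adversarial term: move the (toy) face embedding toward the target identity.
    l_adv = np.sum((w - target_emb) ** 2)
    # Identity-preserving regularization: keep the code near the original.
    l_id = np.sum((w - w0) ** 2)
    return l_makeup + l_adv + lam_id * l_id

def grad(w, lam_id=0.1):
    # Analytic gradient of the toy loss above.
    return -text_dir + 2.0 * (w - target_emb) + 2.0 * lam_id * (w - w0)

# Step 2: adversarial latent search by plain gradient descent.
w = w0.copy()
for _ in range(200):
    w -= 0.05 * grad(w)
```

In the paper's actual pipeline the gradient would flow through the generator and the CLIP/face-recognition encoders rather than through closed-form terms, but the balance of the three losses is the same idea.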