CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search
June 16, 2023
Authors: Fahad Shamshad, Muzammal Naseer, Karthik Nandakumar
cs.AI
Abstract
The success of deep learning based face recognition systems has given rise to
serious privacy concerns due to their ability to enable unauthorized tracking
of users in the digital world. Existing methods for enhancing privacy fail to
generate naturalistic images that can protect facial privacy without
compromising user experience. We propose a novel two-step approach for facial
privacy protection that relies on finding adversarial latent codes in the
low-dimensional manifold of a pretrained generative model. The first step
inverts the given face image into the latent space and finetunes the generative
model to achieve an accurate reconstruction of the given image from its latent
code. This step produces a good initialization, aiding the generation of
high-quality faces that resemble the given identity. Subsequently, user-defined
makeup text prompts and identity-preserving regularization are used to guide
the search for adversarial codes in the latent space. Extensive experiments
demonstrate that faces generated by our approach have stronger black-box
transferability with an absolute gain of 12.06% over the state-of-the-art
facial privacy protection approach under the face verification task. Finally,
we demonstrate the effectiveness of the proposed approach for commercial face
recognition systems. Our code is available at
https://github.com/fahadshamshad/Clip2Protect.
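To make the two-step recipe concrete, here is a minimal numerical sketch of the idea: invert a face into a generator's latent space, then search that latent space for a code whose generated face no longer matches the original identity under a face-recognition embedding, while an identity-preserving penalty keeps the code close to its initialization. All components below (the linear "generator", the linear "face recognizer", the finite-difference optimizer, and the absence of the CLIP makeup-text loss) are toy stand-ins for illustration, not the actual StyleGAN/CLIP pipeline used by CLIP2Protect.

```python
# Toy sketch of adversarial latent search for facial privacy.
# Linear models stand in for the pretrained generator and the
# black-box face recognizer; this is NOT the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)
D_LATENT, D_IMG, D_EMB = 8, 16, 4

G = rng.normal(size=(D_IMG, D_LATENT))   # toy "generator": image = G @ w
F = rng.normal(size=(D_EMB, D_IMG))      # toy "face recognizer" embedding

def embed(img):
    """L2-normalized face embedding, so dot products are cosine similarity."""
    e = F @ img
    return e / np.linalg.norm(e)

x_real = rng.normal(size=D_IMG)          # the face we want to protect
e_real = embed(x_real)

# Step 1 (stand-in for GAN inversion / finetuning): find the latent code
# that best reconstructs the given face, giving a good initialization.
w0, *_ = np.linalg.lstsq(G, x_real, rcond=None)

# Step 2: ascend an objective that pushes the generated face's embedding
# away from the true identity (dodging), with an identity-preserving
# regularizer (weight lam) keeping the latent near its initialization.
w, lr, lam = w0.copy(), 0.05, 0.1

def objective(w):
    adv = -float(embed(G @ w) @ e_real)          # reduce identity similarity
    reg = -lam * float(np.sum((w - w0) ** 2))    # stay close to w0
    return adv + reg

eye = np.eye(D_LATENT)
for _ in range(200):
    # Central finite-difference gradient (toy optimizer, no autograd here).
    g = np.array([(objective(w + 1e-4 * eye[i])
                   - objective(w - 1e-4 * eye[i])) / 2e-4
                  for i in range(D_LATENT)])
    w += lr * g

sim_before = float(embed(G @ w0) @ e_real)
sim_after = float(embed(G @ w) @ e_real)
print(sim_before, sim_after)  # similarity to the true identity should drop
```

The real method replaces the reconstruction step with GAN inversion plus generator finetuning, and adds a CLIP-based loss so a makeup text prompt (e.g. "red lipstick") steers where in latent space the adversarial code is found, keeping the protected face naturalistic.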