Reverse Personalization
December 28, 2025
Authors: Han-Wei Kung, Tuomas Varanka, Nicu Sebe
cs.AI
Abstract
Recent text-to-image diffusion models have demonstrated a remarkable ability to generate realistic facial images conditioned on textual prompts and human identities, enabling the creation of personalized facial imagery. However, existing prompt-based methods for removing or modifying identity-specific features either rely on the subject being well represented in the pre-trained model or require fine-tuning the model for each specific identity. In this work, we analyze the identity generation process and introduce a reverse personalization framework for face anonymization. Our approach leverages conditional diffusion inversion, allowing direct manipulation of images without text prompts. To generalize beyond subjects in the model's training data, we incorporate an identity-guided conditioning branch. Unlike prior anonymization methods, which lack control over facial attributes, our framework supports attribute-controllable anonymization. We demonstrate that our method achieves a state-of-the-art balance between identity removal, attribute preservation, and image quality. Source code and data are available at https://github.com/hanweikung/reverse-personalization.
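The core mechanism the abstract describes, inverting an image through a conditional diffusion process and then regenerating it under a different identity condition, can be illustrated with a toy NumPy sketch. This is not the authors' implementation: the noise predictor `eps_pred`, the schedule `alpha_bar`, and the identity embeddings here are stand-ins (a real system would use a U-Net conditioned on a face-recognition embedding). The predictor is made constant in `x` and `t` so that the deterministic DDIM updates are exactly invertible, which isolates the idea: invert with the source identity, sample with a surrogate identity.

```python
import numpy as np

T = 50  # number of diffusion timesteps

def alpha_bar(t):
    # Toy monotone noise schedule: abar(0)=1 (clean), abar(T)=0.02 (mostly noise).
    return 1.0 - 0.98 * (t / T)

def eps_pred(x, t, ident):
    # Stand-in for the noise-prediction network with an identity-guided
    # conditioning branch. Kept constant in (x, t) so that inversion followed
    # by sampling under the SAME identity reconstructs the input exactly.
    return ident

def ddim_step(x, t_from, t_to, ident):
    # One deterministic DDIM update between two timesteps.
    a_from, a_to = alpha_bar(t_from), alpha_bar(t_to)
    eps = eps_pred(x, t_from, ident)
    x0_hat = (x - np.sqrt(1 - a_from) * eps) / np.sqrt(a_from)
    return np.sqrt(a_to) * x0_hat + np.sqrt(1 - a_to) * eps

def invert(x0, ident):
    # Conditional diffusion inversion: image -> latent noise, no text prompt.
    x = x0
    for t in range(T):
        x = ddim_step(x, t, t + 1, ident)
    return x

def generate(xT, ident):
    # Deterministic sampling: latent noise -> image.
    x = xT
    for t in range(T, 0, -1):
        x = ddim_step(x, t, t - 1, ident)
    return x

rng = np.random.default_rng(0)
image = rng.normal(size=4)
src_id = rng.normal(size=4)  # embedding of the original identity
new_id = rng.normal(size=4)  # embedding of a surrogate identity

latent = invert(image, src_id)
recon = generate(latent, src_id)       # same identity: reconstructs the input
anonymized = generate(latent, new_id)  # swapped identity: a different face
```

Swapping only the conditioning vector between inversion and generation is what makes the anonymization attribute-preserving in spirit: everything encoded in the latent survives, while the identity signal injected at each step changes.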