

Reverse Personalization

December 28, 2025
Authors: Han-Wei Kung, Tuomas Varanka, Nicu Sebe
cs.AI

Abstract

Recent text-to-image diffusion models have demonstrated remarkable generation of realistic facial images conditioned on textual prompts and human identities, enabling the creation of personalized facial imagery. However, existing prompt-based methods for removing or modifying identity-specific features either rely on the subject being well represented in the pre-trained model or require model fine-tuning for specific identities. In this work, we analyze the identity generation process and introduce a reverse personalization framework for face anonymization. Our approach leverages conditional diffusion inversion, allowing direct manipulation of images without text prompts. To generalize beyond subjects in the model's training data, we incorporate an identity-guided conditioning branch. Unlike prior anonymization methods, which lack control over facial attributes, our framework supports attribute-controllable anonymization. We demonstrate that our method achieves a state-of-the-art balance between identity removal, attribute preservation, and image quality. Source code and data are available at https://github.com/hanweikung/reverse-personalization.
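
The abstract's two technical ingredients, conditional diffusion inversion without text prompts and an identity-guided conditioning branch, can be illustrated with a short sketch. The following is a minimal DDIM-style inversion loop in Python, assuming a hypothetical noise-prediction network (eps_model) that takes an identity embedding as its conditioning input; every name, shape, and the stub model here is an illustrative assumption, not the authors' implementation (which is in the linked repository).

    # Illustrative sketch only: a DDIM-style inversion loop conditioned on an
    # identity embedding instead of a text prompt. `eps_model`, the embedding
    # source, and all shapes are hypothetical stand-ins, not the paper's code.
    import torch

    def ddim_invert(eps_model, x0, id_embed, alphas_cumprod, num_steps=50):
        """Map a clean image x0 to a latent noise code by running DDIM backwards,
        feeding an identity embedding as the conditioning signal at each step."""
        T = len(alphas_cumprod)
        timesteps = torch.linspace(0, T - 1, num_steps).long()
        x = x0
        for i in range(num_steps - 1):
            t, t_next = timesteps[i], timesteps[i + 1]
            a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
            # Predict noise under the identity condition (no text prompt).
            eps = eps_model(x, t, cond=id_embed)
            # Recover the model's current estimate of the clean image.
            x0_pred = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
            # Deterministic DDIM step toward the higher noise level.
            x = a_next.sqrt() * x0_pred + (1 - a_next).sqrt() * eps
        return x  # inverted latent code

    # Toy usage with a stub network so the sketch runs end to end.
    if __name__ == "__main__":
        class StubEps(torch.nn.Module):
            def forward(self, x, t, cond):
                return torch.zeros_like(x)

        alphas_cumprod = torch.linspace(0.9999, 0.0001, 1000).cumprod(dim=0)
        x0 = torch.randn(1, 3, 64, 64)   # stand-in face image
        id_embed = torch.randn(1, 512)   # stand-in identity embedding
        latent = ddim_invert(StubEps(), x0, id_embed, alphas_cumprod)
        print(latent.shape)

Anonymization in this spirit would then re-run forward sampling from the inverted latent under a different identity embedding, so the scene and facial attributes carry over while the identity changes; how the paper's actual conditioning branch produces and injects that embedding is specified in its source code, not in this sketch.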