

UMO: Scaling Multi-Identity Consistency for Image Customization via Matching Reward

September 8, 2025
Authors: Yufeng Cheng, Wenxu Wu, Shaojin Wu, Mengqi Huang, Fei Ding, Qian He
cs.AI

Abstract

Recent advancements in image customization exhibit a wide range of application prospects due to stronger customization capabilities. However, since humans are especially sensitive to faces, a significant challenge remains in preserving consistent identity while avoiding identity confusion across multiple reference images, which limits the identity scalability of customization models. To address this, we present UMO, a Unified Multi-identity Optimization framework, designed to maintain high-fidelity identity preservation and alleviate identity confusion with scalability. With a "multi-to-multi matching" paradigm, UMO reformulates multi-identity generation as a global assignment optimization problem and generally unleashes the multi-identity consistency of existing image customization methods through reinforcement learning on diffusion models. To facilitate the training of UMO, we develop a scalable customization dataset with multi-reference images, consisting of both synthesized and real parts. Additionally, we propose a new metric to measure identity confusion. Extensive experiments demonstrate that UMO not only improves identity consistency significantly, but also reduces identity confusion across several image customization methods, setting a new state of the art among open-source methods along the dimension of identity preservation. Code and model: https://github.com/bytedance/UMO
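The "global assignment optimization" framing above can be illustrated with a small sketch. This is not the paper's implementation; it assumes hypothetical, L2-normalized identity embeddings (e.g., from a face-recognition model) for the reference identities and for faces detected in a generated image, and scores a one-to-one assignment with the Hungarian algorithm, so that swapped or blended identities lower the reward:

```python
# Hypothetical sketch of a "multi-to-multi matching" reward; the function
# name and inputs are illustrative assumptions, not the paper's API.
import numpy as np
from scipy.optimize import linear_sum_assignment


def matching_reward(ref_embeds: np.ndarray, gen_embeds: np.ndarray) -> float:
    """Global-assignment reward between reference and generated identities.

    ref_embeds: (R, D) L2-normalized embeddings of reference identities.
    gen_embeds: (G, D) L2-normalized embeddings of generated faces.
    """
    # Cosine-similarity matrix: sim[i, j] = ref_i . gen_j.
    sim = ref_embeds @ gen_embeds.T
    # Hungarian algorithm finds the one-to-one assignment with maximum
    # total similarity (negate because it minimizes cost). A blended or
    # confused face cannot match any single reference well, so the
    # assigned similarities, and hence the reward, drop.
    rows, cols = linear_sum_assignment(-sim)
    return float(sim[rows, cols].mean())
```

In an RL fine-tuning loop, a scalar like this could serve as the per-sample reward; because the assignment is global rather than greedy per face, it penalizes identity swaps that per-face nearest-neighbor scoring would miss.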