

UMO: Scaling Multi-Identity Consistency for Image Customization via Matching Reward

September 8, 2025
Authors: Yufeng Cheng, Wenxu Wu, Shaojin Wu, Mengqi Huang, Fei Ding, Qian He
cs.AI

Abstract

Recent advancements in image customization exhibit a wide range of application prospects due to stronger customization capabilities. However, since humans are more sensitive to faces, a significant challenge remains in preserving consistent identity while avoiding identity confusion with multi-reference images, limiting the identity scalability of customization models. To address this, we present UMO, a Unified Multi-identity Optimization framework, designed to maintain high-fidelity identity preservation and alleviate identity confusion with scalability. With a "multi-to-multi matching" paradigm, UMO reformulates multi-identity generation as a global assignment optimization problem and generally unlocks multi-identity consistency for existing image customization methods through reinforcement learning on diffusion models. To facilitate the training of UMO, we develop a scalable customization dataset with multi-reference images, consisting of both synthesized and real parts. Additionally, we propose a new metric to measure identity confusion. Extensive experiments demonstrate that UMO not only improves identity consistency significantly, but also reduces identity confusion on several image customization methods, setting a new state of the art among open-source methods along the dimension of identity preservation. Code and model: https://github.com/bytedance/UMO
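The abstract's "multi-to-multi matching" idea — scoring a multi-identity generation by the best one-to-one assignment between reference and generated faces — can be illustrated with a standard global assignment solver. The sketch below is an assumption, not the paper's actual reward implementation: it uses cosine similarity between hypothetical face embeddings and the Hungarian algorithm from SciPy to compute a matching score that penalizes identity confusion (a generated face pairing best with the wrong reference).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def matching_reward(ref_embs: np.ndarray, gen_embs: np.ndarray):
    """Global-assignment matching score between reference and generated faces.

    ref_embs, gen_embs: (n, d) arrays of face embeddings, one row per identity.
    Returns the mean cosine similarity under the optimal one-to-one matching,
    plus the matched (reference_index, generated_index) pairs.
    """
    # Normalize rows so dot products are cosine similarities.
    ref = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    gen = gen_embs / np.linalg.norm(gen_embs, axis=1, keepdims=True)
    sim = ref @ gen.T  # (n_ref, n_gen) pairwise cosine similarity matrix

    # Hungarian algorithm: maximize total similarity over a one-to-one
    # assignment, so each generated face is credited to exactly one reference.
    rows, cols = linear_sum_assignment(sim, maximize=True)
    reward = float(sim[rows, cols].mean())
    return reward, list(zip(rows.tolist(), cols.tolist()))


# Toy example: generated faces appear in swapped order relative to the
# references; the global assignment still recovers the correct pairing.
refs = np.array([[1.0, 0.0], [0.0, 1.0]])
gens = np.array([[0.0, 1.0], [1.0, 0.0]])
reward, pairs = matching_reward(refs, gens)
print(reward, pairs)  # → 1.0 [(0, 1), (1, 0)]
```

A per-face greedy nearest-reference score could assign two generated faces to the same reference; solving the assignment globally is what distinguishes the matching formulation described in the abstract.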