Subject-Consistent and Pose-Diverse Text-to-Image Generation
July 11, 2025
Authors: Zhanxin Gao, Beier Zhu, Liang Yao, Jian Yang, Ying Tai
cs.AI
Abstract
Subject-consistent generation (SCG), which aims to maintain a consistent subject identity across diverse scenes, remains a challenge for text-to-image (T2I) models. Existing training-free SCG methods often achieve consistency at the cost of layout and pose diversity, hindering expressive visual storytelling. To address this limitation, we propose a subject-Consistent and pose-Diverse T2I framework, dubbed CoDi, that enables consistent subject generation with diverse poses and layouts. Motivated by the progressive nature of diffusion, where coarse structures emerge early and fine details are refined later, CoDi adopts a two-stage strategy: Identity Transport (IT) and Identity Refinement (IR). IT operates in the early denoising steps, using optimal transport to transfer identity features to each target image in a pose-aware manner. This promotes subject consistency while preserving pose diversity. IR is applied in the later denoising steps, selecting the most salient identity features to further refine subject details. Extensive qualitative and quantitative results on subject consistency, pose diversity, and prompt fidelity demonstrate that CoDi achieves both better visual perception and stronger performance across all metrics. The code is available at https://github.com/NJU-PCALab/CoDi.