CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching
April 4, 2024
Authors: Dongzhi Jiang, Guanglu Song, Xiaoshi Wu, Renrui Zhang, Dazhong Shen, Zhuofan Zong, Yu Liu, Hongsheng Li
cs.AI
Abstract
Diffusion models have demonstrated great success in the field of
text-to-image generation. However, alleviating the misalignment between the
text prompts and images is still challenging. The root reason behind the
misalignment has not been extensively investigated. We observe that the
misalignment is caused by inadequate token attention activation. We further
attribute this phenomenon to the diffusion model's insufficient condition
utilization, which is caused by its training paradigm. To address the issue, we
propose CoMat, an end-to-end diffusion model fine-tuning strategy with an
image-to-text concept matching mechanism. We leverage an image captioning model
to measure image-to-text alignment and guide the diffusion model to revisit
ignored tokens. A novel attribute concentration module is also proposed to
address the attribute binding problem. Without any image or human preference
data, we use only 20K text prompts to fine-tune SDXL to obtain CoMat-SDXL.
Extensive experiments show that CoMat-SDXL significantly outperforms the
baseline model SDXL in two text-to-image alignment benchmarks and achieves
state-of-the-art performance.