

DreamMatcher: Appearance Matching Self-Attention for Semantically-Consistent Text-to-Image Personalization

February 15, 2024
作者: Jisu Nam, Heesu Kim, DongJae Lee, Siyoon Jin, Seungryong Kim, Seunggyu Chang
cs.AI

Abstract

The objective of text-to-image (T2I) personalization is to customize a diffusion model to a user-provided reference concept, generating diverse images of the concept aligned with the target prompts. Conventional methods, which represent the reference concept with a unique text embedding, often fail to accurately mimic the appearance of the reference. To address this, one solution may be to explicitly condition the target denoising process on the reference images, known as key-value replacement. However, prior works are constrained to local editing since they disrupt the structure path of the pre-trained T2I model. To overcome this, we propose a novel plug-in method, called DreamMatcher, which reformulates T2I personalization as semantic matching. Specifically, DreamMatcher replaces the target values with reference values aligned by semantic matching, while leaving the structure path unchanged to preserve the versatile capability of pre-trained T2I models for generating diverse structures. We also introduce a semantic-consistent masking strategy to isolate the personalized concept from irrelevant regions introduced by the target prompts. Compatible with existing T2I models, DreamMatcher shows significant improvements in complex scenarios. Extensive analyses demonstrate the effectiveness of our approach.
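The core idea — keep the target's structure path (queries and keys) intact while substituting semantically aligned reference values inside a concept mask — can be sketched as a toy self-attention step. This is a minimal illustration, not the paper's implementation: the function name, the precomputed row-stochastic `match` matrix standing in for the semantic-matching module, and the per-token boolean `mask` are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def matching_aware_self_attention(q_tgt, k_tgt, v_tgt, v_ref, match, mask):
    """Toy sketch of value replacement with aligned reference values.

    q_tgt, k_tgt, v_tgt : (N, d) target queries/keys/values (structure path).
    v_ref               : (M, d) reference values (appearance source).
    match               : (N, M) row-stochastic semantic correspondence
                          (hypothetical stand-in for the matching module).
    mask                : (N,) bool, True where the personalized concept lies.
    """
    # Structure path is untouched: attention weights come only from the target.
    attn = softmax(q_tgt @ k_tgt.T / np.sqrt(q_tgt.shape[-1]))
    # Appearance path: warp reference values into the target layout.
    v_warped = match @ v_ref
    # Replace values only inside the concept mask; elsewhere keep target values.
    v_mixed = np.where(mask[:, None], v_warped, v_tgt)
    return attn @ v_mixed
```

With an all-False mask the function reduces to ordinary self-attention over the target values, which is one way to see that the structure path is preserved.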

