Subject-Diffusion: Open Domain Personalized Text-to-Image Generation without Test-time Fine-tuning
July 21, 2023
Authors: Jian Ma, Junhao Liang, Chen Chen, Haonan Lu
cs.AI
Abstract
Recent progress in personalized image generation using diffusion models has
been significant. However, development in open-domain, fine-tuning-free
personalized image generation has proceeded rather slowly. In this paper, we
propose Subject-Diffusion, a novel open-domain personalized image generation
model that requires no test-time fine-tuning and needs only a single
reference image to support personalized generation of single or multiple
subjects in any domain. First, we build an automatic data-labeling tool and
use the LAION-Aesthetics dataset to construct a large-scale dataset of 76M
images with their corresponding subject detection bounding boxes,
segmentation masks, and text descriptions. Second, we design a new unified
framework that combines text and image semantics, incorporating coarse
location and fine-grained reference-image control to maximize subject
fidelity and generalization. Furthermore, we adopt an attention control
mechanism to support multi-subject generation. Extensive qualitative and
quantitative results demonstrate that our method outperforms other
state-of-the-art frameworks in single-subject, multi-subject, and
human-customized image generation. Please refer to our project page:
https://oppo-mente-lab.github.io/subject_diffusion/
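
To make the data-construction step concrete, below is a minimal Python
sketch of an automatic labeling pipeline of the kind the abstract describes:
caption an image, extract candidate subject phrases, then ground each phrase
with a detector and a segmenter. The `caption_model`, `detector`, and
`segmenter` callables are hypothetical placeholders, not the paper's actual
tool chain; only the spaCy noun-chunk extraction is a concrete API.

```python
# Hedged sketch of an automatic data-labeling pipeline in the spirit of the
# abstract: image -> caption -> subject phrases -> boxes -> masks.
# `caption_model`, `detector`, and `segmenter` are hypothetical stand-ins for
# off-the-shelf captioning, open-vocabulary detection, and segmentation models.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model for noun-phrase parsing

def label_image(image, caption_model, detector, segmenter):
    caption = caption_model(image)  # e.g. "a corgi sitting on a red sofa"
    # Noun chunks serve as candidate subject phrases to ground in the image.
    phrases = [chunk.text for chunk in nlp(caption).noun_chunks]
    # One bounding box per grounded phrase, and one segmentation mask per box.
    boxes = detector(image, phrases)
    masks = [segmenter(image, box) for box in boxes]
    return {"caption": caption, "phrases": phrases,
            "boxes": boxes, "masks": masks}
```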
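The abstract's attention control mechanism for multi-subject generation is
not spelled out here. One common way to realize such control is to restrict
cross-attention so that image positions outside a subject's region cannot
attend to that subject's text token. The PyTorch sketch below implements that
masking idea under this assumption; the shapes and the `region_masks` input
are illustrative, not the paper's exact formulation.

```python
import torch

def masked_cross_attention(q, k, v, subject_token_ids, region_masks):
    """Cross-attention with per-subject region masking (illustrative only).

    q: (B, N_img, d) image queries; k, v: (B, N_txt, d) text keys/values.
    subject_token_ids: list of text-token indices, one per subject.
    region_masks: (B, n_subjects, N_img) binary masks derived from boxes.
    """
    scale = q.shape[-1] ** -0.5
    attn = torch.einsum("bnd,bmd->bnm", q, k) * scale  # (B, N_img, N_txt)
    for s, tok in enumerate(subject_token_ids):
        # Image positions outside subject s's region must not attend to its token.
        outside = region_masks[:, s, :] == 0  # (B, N_img) boolean mask
        attn[outside, tok] = float("-inf")
    attn = attn.softmax(dim=-1)
    return torch.einsum("bnm,bmd->bnd", attn, v)
```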