

BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models

January 25, 2024
Authors: Senthil Purushwalkam, Akash Gokul, Shafiq Joty, Nikhil Naik
cs.AI

Abstract

Recent text-to-image generation models have demonstrated incredible success in generating images that faithfully follow input prompts. However, the requirement of using words to describe a desired concept provides limited control over the appearance of the generated concepts. In this work, we address this shortcoming by proposing an approach to enable personalization capabilities in existing text-to-image diffusion models. We propose a novel architecture (BootPIG) that allows a user to provide reference images of an object in order to guide the appearance of a concept in the generated images. The proposed BootPIG architecture makes minimal modifications to a pretrained text-to-image diffusion model and utilizes a separate UNet model to steer the generations toward the desired appearance. We introduce a training procedure that allows us to bootstrap personalization capabilities in the BootPIG architecture using data generated from pretrained text-to-image models, LLM chat agents, and image segmentation models. In contrast to existing methods that require several days of pretraining, the BootPIG architecture can be trained in approximately 1 hour. Experiments on the DreamBooth dataset demonstrate that BootPIG outperforms existing zero-shot methods while being comparable with test-time finetuning approaches. Through a user study, we validate the preference for BootPIG generations over existing methods both in maintaining fidelity to the reference object's appearance and aligning with textual prompts.
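The abstract does not spell out how the separate UNet steers the pretrained model, but a common mechanism for this style of reference conditioning is to append features extracted from the reference image to the keys and values of the base UNet's self-attention layers. The sketch below illustrates that idea in PyTorch as a minimal, self-contained layer; the class and argument names (ReferenceInjectedSelfAttention, ref_feats) are hypothetical and not taken from the paper or its code.

```python
# Minimal sketch (not the authors' implementation): one plausible way a reference
# branch can steer a base UNet, assuming reference features are concatenated onto
# the keys and values of the base model's self-attention. Names are hypothetical.
from typing import Optional

import torch
import torch.nn as nn
import torch.nn.functional as F


class ReferenceInjectedSelfAttention(nn.Module):
    """Self-attention whose keys/values are optionally extended with features
    produced by a separate reference branch (e.g. a copy of the UNet run on
    the reference image)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, ref_feats: Optional[torch.Tensor] = None):
        # x:         (batch, tokens, dim)     latent features of the generation path
        # ref_feats: (batch, ref_tokens, dim) features from the reference branch
        q = self.to_q(x)
        context = x if ref_feats is None else torch.cat([x, ref_feats], dim=1)
        k, v = self.to_k(context), self.to_v(context)

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            # (batch, tokens, dim) -> (batch, heads, tokens, dim_per_head)
            b, n, _ = t.shape
            return t.view(b, n, self.num_heads, -1).transpose(1, 2)

        attn = F.scaled_dot_product_attention(split_heads(q), split_heads(k), split_heads(v))
        attn = attn.transpose(1, 2).reshape(x.shape)
        return self.to_out(attn)


if __name__ == "__main__":
    layer = ReferenceInjectedSelfAttention(dim=64)
    latents = torch.randn(2, 16, 64)    # tokens on the generation path
    reference = torch.randn(2, 32, 64)  # tokens extracted from the reference image(s)
    out = layer(latents, reference)
    print(out.shape)  # torch.Size([2, 16, 64])
```

In a setup like this, only the injection layers (and the reference branch, if it is not frozen) would need training, which is consistent with the short training time the abstract reports; the exact layers BootPIG modifies and trains are described in the full paper.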