Custom-Edit: Text-Guided Image Editing with Customized Diffusion Models
May 25, 2023
Authors: Jooyoung Choi, Yunjey Choi, Yunji Kim, Junho Kim, Sungroh Yoon
cs.AI
Abstract
Text-to-image diffusion models can generate diverse, high-fidelity images
based on user-provided text prompts. Recent research has extended these models
to support text-guided image editing. While text guidance is an intuitive
editing interface for users, it often fails to capture the precise concept the
user intends. To address this issue, we propose Custom-Edit, in which we
(i) customize a diffusion model with a few reference images and then (ii)
perform text-guided editing. Our key discovery is that customizing only
language-relevant parameters with augmented prompts improves reference
similarity significantly while maintaining source similarity. Moreover, we
provide our recipe for each customization and editing process. We compare
popular customization methods and validate our findings on two editing methods
using various datasets.
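
To make the key finding concrete, the sketch below illustrates one way to restrict customization to language-relevant parameters in Stable Diffusion with the diffusers library: only the cross-attention key/value projections (the weights that consume text-encoder features) and a new placeholder-token embedding are left trainable, while everything else is frozen. This is a minimal sketch under stated assumptions, not the authors' released implementation; the model ID, token name, and hyperparameters are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Model ID and placeholder token are illustrative assumptions,
# not taken from the paper's released code.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
unet, text_encoder, tokenizer = pipe.unet, pipe.text_encoder, pipe.tokenizer

# Register a placeholder token whose embedding will represent the new concept.
tokenizer.add_tokens(["<new-concept>"])
text_encoder.resize_token_embeddings(len(tokenizer))

# Freeze everything by default.
unet.requires_grad_(False)
text_encoder.requires_grad_(False)

# Unfreeze only the cross-attention key/value projections, i.e. the UNet
# parameters that project text features ("language-relevant" weights).
trainable_params = []
for name, module in unet.named_modules():
    if name.endswith("attn2"):  # attn2 = cross-attention in diffusers UNet blocks
        for proj in (module.to_k, module.to_v):
            proj.requires_grad_(True)
            trainable_params += list(proj.parameters())

# Also optimize the token-embedding table so the new token's vector is learned
# (keeping the remaining rows fixed would require a gradient mask, omitted here).
embeddings = text_encoder.get_input_embeddings()
embeddings.weight.requires_grad_(True)
trainable_params.append(embeddings.weight)

optimizer = torch.optim.AdamW(trainable_params, lr=1e-5)
# Training would then minimize the standard diffusion denoising loss on the few
# reference images, paired with augmented prompts such as
# "a photo of <new-concept> teapot".
```

After this customization step, step (ii) of the abstract would apply an off-the-shelf text-guided editing method to the source image, using prompts that include the learned placeholder token.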