StyleDrop: Text-to-Image Generation in Any Style
June 1, 2023
Authors: Kihyuk Sohn, Nataniel Ruiz, Kimin Lee, Daniel Castro Chin, Irina Blok, Huiwen Chang, Jarred Barber, Lu Jiang, Glenn Entis, Yuanzhen Li, Yuan Hao, Irfan Essa, Michael Rubinstein, Dilip Krishnan
cs.AI
Abstract
Pre-trained large text-to-image models synthesize impressive images with an
appropriate use of text prompts. However, ambiguities inherent in natural
language and out-of-distribution effects make it hard to synthesize image
styles that leverage a specific design pattern, texture, or material. In this
paper, we introduce StyleDrop, a method that enables the synthesis of images
that faithfully follow a specific style using a text-to-image model. The
proposed method is extremely versatile and captures nuances and details of a
user-provided style, such as color schemes, shading, design patterns, and local
and global effects. It efficiently learns a new style by fine-tuning very few
trainable parameters (less than 1% of total model parameters) and improving
the quality via iterative training with either human or automated feedback.
Better yet, StyleDrop is able to deliver impressive results even when the user
supplies only a single image that specifies the desired style. An extensive
study shows that, for the task of style-tuning text-to-image models, StyleDrop
implemented on Muse convincingly outperforms other methods, including
DreamBooth and textual inversion on Imagen or Stable Diffusion. More results
are available at our project website: https://styledrop.github.io
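The abstract notes that StyleDrop fine-tunes very few trainable parameters (less than 1% of the model). As a minimal, hypothetical sketch only, assuming an adapter-style parameter-efficient fine-tuning setup in PyTorch (not the authors' actual Muse implementation; the Adapter class and attach_adapters helper are illustrative names), the snippet below freezes the backbone, leaves only small bottleneck adapters trainable, and reports the resulting trainable fraction:

# Hypothetical sketch of adapter-style parameter-efficient fine-tuning.
# NOT the StyleDrop/Muse code; `Adapter` and `attach_adapters` are illustrative.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small residual bottleneck module inserted after a frozen block."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        # Residual form keeps outputs close to the frozen backbone features.
        return x + self.up(torch.relu(self.down(x)))

def attach_adapters(backbone: nn.Module, dim: int, num_layers: int) -> nn.ModuleList:
    # Freeze every backbone parameter; only the adapters receive gradients.
    for p in backbone.parameters():
        p.requires_grad = False
    return nn.ModuleList([Adapter(dim) for _ in range(num_layers)])

def trainable_fraction(backbone: nn.Module, adapters: nn.Module) -> float:
    trainable = sum(p.numel() for p in adapters.parameters())
    total = trainable + sum(p.numel() for p in backbone.parameters())
    return trainable / total

Under illustrative sizes (hidden size 1024, 24 adapters), the adapters add roughly 3M parameters, which is well under 1% of a multi-billion-parameter backbone, consistent with the figure quoted in the abstract.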