StyleDrop: Text-to-Image Generation in Any Style

June 1, 2023
Authors: Kihyuk Sohn, Nataniel Ruiz, Kimin Lee, Daniel Castro Chin, Irina Blok, Huiwen Chang, Jarred Barber, Lu Jiang, Glenn Entis, Yuanzhen Li, Yuan Hao, Irfan Essa, Michael Rubinstein, Dilip Krishnan
cs.AI

Abstract

Pre-trained large text-to-image models synthesize impressive images with an appropriate use of text prompts. However, ambiguities inherent in natural language and out-of-distribution effects make it hard to synthesize image styles that leverage a specific design pattern, texture, or material. In this paper, we introduce StyleDrop, a method that enables the synthesis of images that faithfully follow a specific style using a text-to-image model. The proposed method is extremely versatile and captures nuances and details of a user-provided style, such as color schemes, shading, design patterns, and local and global effects. It efficiently learns a new style by fine-tuning very few trainable parameters (less than 1% of total model parameters) and improving the quality via iterative training with either human or automated feedback. Better yet, StyleDrop is able to deliver impressive results even when the user supplies only a single image that specifies the desired style. An extensive study shows that, for the task of style tuning text-to-image models, StyleDrop implemented on Muse convincingly outperforms other methods, including DreamBooth and textual inversion on Imagen or Stable Diffusion. More results are available at our project website: https://styledrop.github.io
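
The abstract summarizes the method at a high level: only a small set of added parameters is trained on top of a frozen backbone, and training is repeated with human or automated feedback on generated samples. The snippet below is a minimal, hypothetical PyTorch sketch of that general recipe, not the StyleDrop/Muse implementation; the toy backbone, adapter layout, placeholder objective, and every name in it are illustrative assumptions.

```python
# Minimal, hypothetical sketch of the recipe described in the abstract:
# freeze a large backbone, train only tiny zero-initialized adapters
# (well under 1% of parameters), and iterate on feedback-selected data.
# The toy backbone, placeholder loss, and all names are assumptions,
# not the StyleDrop/Muse implementation.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter; the up-projection starts at zero, so the
    adapted block initially reproduces the frozen block exactly."""
    def __init__(self, dim: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class AdaptedBlock(nn.Module):
    """A frozen transformer block followed by a trainable adapter."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block, self.adapter = block, Adapter(dim)
        for p in self.block.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        return self.adapter(self.block(x))

# Toy stand-in for a large frozen text-to-image backbone.
dim = 256
backbone = nn.Sequential(*[
    AdaptedBlock(nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), dim)
    for _ in range(6)
])

trainable = [p for p in backbone.parameters() if p.requires_grad]
total = sum(p.numel() for p in backbone.parameters())
print(f"trainable fraction: {sum(p.numel() for p in trainable) / total:.2%}")

# Round 1 fits the adapters on the single user-provided style example; later
# rounds would append synthesized samples accepted by human or automated
# feedback and repeat the same loop.
style_batch = torch.randn(1, 32, dim)  # placeholder for encoded style-image tokens
opt = torch.optim.Adam(trainable, lr=1e-3)
for _ in range(100):
    loss = ((backbone(style_batch) - style_batch) ** 2).mean()  # placeholder objective
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because the adapter's residual branch is zero-initialized, the tuned model starts exactly at the pretrained model and departs from it only as far as the style data demands, which is a common design choice in parameter-efficient fine-tuning of this kind.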