
StyleAdapter: A Single-Pass LoRA-Free Model for Stylized Image Generation

September 4, 2023
作者: Zhouxia Wang, Xintao Wang, Liangbin Xie, Zhongang Qi, Ying Shan, Wenping Wang, Ping Luo
cs.AI

Abstract

This paper presents a LoRA-free method for stylized image generation that takes a text prompt and style reference images as inputs and produces an output image in a single pass. Unlike existing methods that rely on training a separate LoRA for each style, our method can adapt to various styles with a unified model. However, this poses two challenges: 1) the prompt loses controllability over the generated content, and 2) the output image inherits both the semantic and style features of the style reference image, compromising its content fidelity. To address these challenges, we introduce StyleAdapter, a model that comprises two components: a two-path cross-attention module (TPCA) and three decoupling strategies. These components enable our model to process the prompt and style reference features separately and reduce the strong coupling between the semantic and style information in the style references. StyleAdapter can generate high-quality images that match the content of the prompts and adopt the style of the references (even for unseen styles) in a single pass, which is more flexible and efficient than previous methods. Experiments have been conducted to demonstrate the superiority of our method over previous works.
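The abstract's key mechanism is the two-path cross-attention module (TPCA), which lets the generator attend to the text-prompt features and the style-reference features on separate paths so that the style image's semantics do not override the prompt. The following is a minimal NumPy sketch of that idea only; the function names, the additive fusion, and the `style_weight` parameter are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def attention(q, k, v):
    # Standard scaled dot-product attention with a row-wise softmax.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def two_path_cross_attention(x, prompt_feats, style_feats, style_weight=0.5):
    """Hypothetical TPCA sketch: the latent tokens x attend to the prompt
    features and the style-reference features on two separate paths, and
    the two results are fused, rather than mixing both feature sets in a
    single attention call. `style_weight` is an assumed fusion scalar."""
    prompt_path = attention(x, prompt_feats, prompt_feats)
    style_path = attention(x, style_feats, style_feats)
    return x + prompt_path + style_weight * style_path

# Toy shapes: 4 latent tokens, 6 prompt tokens, 5 style tokens, dim 8.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out = two_path_cross_attention(
    x, rng.standard_normal((6, 8)), rng.standard_normal((5, 8))
)
assert out.shape == (4, 8)
```

Keeping the two attention paths separate is what allows the fusion weight on the style path to be tuned (or a decoupling strategy applied) without degrading the prompt's control over content.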