SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models
May 9, 2023
Authors: Shanshan Zhong, Zhongzhan Huang, Wushao Wen, Jinghui Qin, Liang Lin
cs.AI
Abstract
Diffusion models, which have emerged as popular text-to-image generation models, can produce high-quality and content-rich images guided by textual prompts. However, existing models have limited semantic understanding and commonsense reasoning when the input prompts are concise narratives, resulting in low-quality image generation. To improve the capacity to handle narrative prompts, we propose a simple yet effective parameter-efficient fine-tuning approach called the Semantic Understanding and Reasoning adapter (SUR-adapter) for pre-trained diffusion models. To reach this goal, we first collect and annotate a new dataset, SURD, which consists of more than 57,000 semantically corrected multi-modal samples. Each sample contains a simple narrative prompt, a complex keyword-based prompt, and a high-quality image. Then, we align the semantic representations of narrative prompts with those of the complex prompts and transfer knowledge from large language models (LLMs) to our SUR-adapter via knowledge distillation, so that it acquires powerful semantic understanding and reasoning capabilities and can build high-quality textual semantic representations for text-to-image generation. We conduct experiments integrating multiple LLMs with popular pre-trained diffusion models to show the effectiveness of our approach in enabling diffusion models to understand and reason about concise natural language without degrading image quality. Our approach can make text-to-image diffusion models easier to use and provide a better user experience, which demonstrates its potential to further advance the development of user-friendly text-to-image generation models by bridging the semantic gap between simple narrative prompts and complex keyword-based prompts.
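
For intuition, the sketch below shows one way the training signals described in the abstract could fit together in PyTorch: an alignment loss pulling adapted narrative-prompt features toward complex-prompt features, a distillation loss from frozen LLM features, and the standard denoising objective to preserve image quality. The adapter architecture, embedding dimensions, MSE losses, and weights `w_align`/`w_distill` are all illustrative assumptions, not the paper's actual implementation.

```python
# A minimal sketch of the SUR-adapter training signals (assumed details).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SURAdapter(nn.Module):
    """Lightweight residual adapter that refines the frozen text encoder's
    output before it conditions the diffusion model. The architecture and
    dimensions here are hypothetical, not the paper's design."""

    def __init__(self, dim: int = 768, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the adapter parameter-efficient and
        # preserves the pre-trained embedding when the adapter is untrained.
        return text_emb + self.net(text_emb)


def sur_adapter_loss(adapter: SURAdapter,
                     narrative_emb: torch.Tensor,  # encoder(simple prompt)
                     complex_emb: torch.Tensor,    # encoder(keyword prompt)
                     llm_emb: torch.Tensor,        # frozen LLM features,
                                                   # assumed projected to dim
                     denoise_loss: torch.Tensor,   # standard diffusion loss
                     w_align: float = 1.0,         # assumed loss weights
                     w_distill: float = 1.0) -> torch.Tensor:
    """Combine the three signals the abstract describes: (1) align adapted
    narrative-prompt features with complex-prompt features, (2) distill
    semantic knowledge from an LLM, and (3) retain the denoising objective
    so image quality is not degraded. MSE is a placeholder choice here."""
    adapted = adapter(narrative_emb)
    align = F.mse_loss(adapted, complex_emb.detach())
    distill = F.mse_loss(adapted, llm_emb.detach())
    return denoise_loss + w_align * align + w_distill * distill
```

At inference time, only the adapter's refined embedding would be fed to the frozen diffusion model, so a user could supply a simple narrative prompt and still obtain results closer to those of a keyword-engineered one.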