

SUR-adapter: Enhancing Text-to-Image Pre-trained Diffusion Models with Large Language Models

May 9, 2023
Authors: Shanshan Zhong, Zhongzhan Huang, Wushao Wen, Jinghui Qin, Liang Lin
cs.AI

Abstract

Diffusion models have emerged as popular text-to-image generation models that can produce high-quality, content-rich images guided by textual prompts. However, existing models are limited in semantic understanding and commonsense reasoning when the input prompt is a concise narrative, which results in low-quality image generation. To improve their handling of narrative prompts, we propose a simple yet effective parameter-efficient fine-tuning approach for pre-trained diffusion models, called the Semantic Understanding and Reasoning adapter (SUR-adapter). To reach this goal, we first collect and annotate a new dataset, SURD, which consists of more than 57,000 semantically corrected multi-modal samples. Each sample contains a simple narrative prompt, a complex keyword-based prompt, and a high-quality image. We then align the semantic representation of narrative prompts to that of the complex prompts and transfer knowledge from large language models (LLMs) to our SUR-adapter via knowledge distillation, so that it acquires powerful semantic understanding and reasoning capabilities and builds high-quality textual semantic representations for text-to-image generation. We conduct experiments integrating multiple LLMs with popular pre-trained diffusion models, showing that our approach enables diffusion models to understand and reason over concise natural language without degrading image quality. Our approach makes text-to-image diffusion models easier to use and improves the user experience, demonstrating its potential to further advance user-friendly text-to-image generation models by bridging the semantic gap between simple narrative prompts and complex keyword-based prompts.
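The abstract does not spell out the adapter's training objective, but the two ingredients it names (aligning narrative-prompt representations to complex-prompt representations, and distilling LLM knowledge into the adapter) can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the adapter is modeled as a single linear map `adapter_w`, both losses are mean-squared error, and `alpha`/`beta` are hypothetical weighting hyperparameters, none of which are specified in the abstract.

```python
import numpy as np

def mse(a, b):
    """Mean-squared error between two embedding vectors."""
    return float(np.mean((a - b) ** 2))

def sur_adapter_loss(narrative_emb, complex_emb, llm_emb, adapter_w,
                     alpha=1.0, beta=1.0):
    """Hypothetical training objective combining the two goals the abstract
    describes: (1) align the adapted narrative-prompt embedding with the
    complex keyword-based prompt embedding, and (2) distill knowledge from
    an LLM's representation of the same prompt. The real SUR-adapter losses
    are not given in the abstract; MSE is used here only as a placeholder."""
    adapted = narrative_emb @ adapter_w      # adapter modeled as a linear map
    align = mse(adapted, complex_emb)        # alignment to the complex prompt
    distill = mse(adapted, llm_emb)          # distillation from LLM features
    return alpha * align + beta * distill

# Toy example with random 8-dimensional "embeddings".
rng = np.random.default_rng(0)
d = 8
n_emb = rng.normal(size=d)   # simple narrative prompt embedding
c_emb = rng.normal(size=d)   # complex keyword-based prompt embedding
l_emb = rng.normal(size=d)   # LLM semantic representation
w = np.eye(d)                # identity init leaves the text encoder unchanged
loss = sur_adapter_loss(n_emb, c_emb, l_emb, w)
print(round(loss, 4))
```

Minimizing such an objective over the adapter parameters (while the diffusion model stays frozen, as in typical parameter-efficient fine-tuning) would push narrative-prompt representations toward the richer representations that complex prompts and the LLM provide.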