UNIMO-G: Unified Image Generation through Multimodal Conditional Diffusion
January 24, 2024
Authors: Wei Li, Xue Xu, Jiachen Liu, Xinyan Xiao
cs.AI
Abstract
Existing text-to-image diffusion models primarily generate images from text
prompts. However, the inherent conciseness of textual descriptions poses
challenges in faithfully synthesizing images with intricate details, such as
specific entities or scenes. This paper presents UNIMO-G, a simple
multimodal conditional diffusion framework that operates on multimodal prompts
with interleaved textual and visual inputs, which demonstrates a unified
ability for both text-driven and subject-driven image generation. UNIMO-G
comprises two core components: a Multimodal Large Language Model (MLLM) for
encoding multimodal prompts, and a conditional denoising diffusion network for
generating images based on the encoded multimodal input. We leverage a
two-stage training strategy to effectively train the framework: firstly
pre-training on large-scale text-image pairs to develop conditional image
generation capabilities, and then instruction tuning with multimodal prompts to
achieve unified image generation proficiency. A well-designed data processing
pipeline involving language grounding and image segmentation is employed to
construct multimodal prompts. UNIMO-G excels in both text-to-image generation
and zero-shot subject-driven synthesis, and is notably effective in generating
high-fidelity images from complex multimodal prompts involving multiple image
entities.
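
The abstract names two core components and a two-stage training recipe. Below is a minimal, hypothetical sketch of how such a design could be wired together: an MLLM-style encoder turns an interleaved text/image prompt into a sequence of hidden states, which a conditional denoising network consumes as conditioning context. All module names, dimensions, and interfaces here are illustrative assumptions, not the authors' implementation.

# Minimal, hypothetical sketch of the two-component design summarized above
# (illustrative only; module names, dimensions, and interfaces are assumptions,
# not the paper's code). An MLLM-style encoder maps an interleaved text/image
# prompt to a sequence of hidden states, and a conditional denoising network
# uses those states as its conditioning context when predicting noise.
import torch.nn as nn

class MultimodalConditionedDenoiser(nn.Module):
    def __init__(self, mllm_encoder: nn.Module, denoiser: nn.Module,
                 mllm_dim: int = 4096, cond_dim: int = 768):
        super().__init__()
        self.mllm_encoder = mllm_encoder              # encodes the interleaved prompt
        self.project = nn.Linear(mllm_dim, cond_dim)  # maps MLLM states to the denoiser's context width
        self.denoiser = denoiser                      # predicts noise from latents, timestep, and context

    def forward(self, noisy_latents, timesteps, multimodal_prompt):
        cond = self.mllm_encoder(multimodal_prompt)   # (batch, seq_len, mllm_dim)
        cond = self.project(cond)                     # (batch, seq_len, cond_dim)
        return self.denoiser(noisy_latents, timesteps, cond)

# Per the abstract's two-stage recipe: stage 1 pre-trains on large-scale
# text-image pairs (text-only prompts); stage 2 instruction-tunes on multimodal
# prompts whose entity phrases are grounded and paired with segmented images.

In practice the denoiser in such systems is typically a cross-attention UNet operating in a latent space, but the abstract does not specify this; it is stated here only as an assumption.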