Generating Images with Multimodal Language Models
May 26, 2023
Authors: Jing Yu Koh, Daniel Fried, Ruslan Salakhutdinov
cs.AI
Abstract
We propose a method to fuse frozen text-only large language models (LLMs)
with pre-trained image encoder and decoder models, by mapping between their
embedding spaces. Our model demonstrates a wide suite of multimodal
capabilities: image retrieval, novel image generation, and multimodal dialogue.
Ours is the first approach capable of conditioning on arbitrarily interleaved
image and text inputs to generate coherent image (and text) outputs. To achieve
strong performance on image generation, we propose an efficient mapping network
to ground the LLM to an off-the-shelf text-to-image generation model. This
mapping network translates hidden representations of text into the embedding
space of the visual models, enabling us to leverage the strong text
representations of the LLM for visual outputs. Our approach outperforms
baseline generation models on tasks with longer and more complex language. In
addition to novel image generation, our model is also capable of image
retrieval from a prespecified dataset, and decides whether to retrieve or
generate at inference time. This is done with a learnt decision module which
conditions on the hidden representations of the LLM. Our model exhibits a wider
range of capabilities compared to prior multimodal language models. It can
process image-and-text inputs, and produce retrieved images, generated images,
and generated text -- outperforming non-LLM based generation models across
several text-to-image tasks that measure context dependence.
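The two learned components the abstract describes can be sketched in a few lines: a mapping network that projects the frozen LLM's hidden representations into the visual model's embedding space, and a decision head over the same hidden state that chooses between retrieval and generation at inference time. The sketch below is a minimal illustration, not the paper's implementation; the dimensions, the single linear projection, and the two-way score head are all assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (the paper's actual model sizes differ).
llm_dim = 4096  # hidden size of the frozen LLM
vis_dim = 768   # conditioning-embedding size of the text-to-image model

# Mapping network: here a single learned linear projection from the
# LLM's hidden space into the visual model's embedding space.
W_map = rng.standard_normal((llm_dim, vis_dim)) * 0.01

# Decision module: a learned head over the LLM hidden state that
# scores "retrieve" vs. "generate" at inference time.
W_decide = rng.standard_normal((llm_dim, 2)) * 0.01

def map_to_visual(h: np.ndarray) -> np.ndarray:
    """Project an LLM hidden representation into the visual embedding space."""
    return h @ W_map

def retrieve_or_generate(h: np.ndarray) -> str:
    """Pick an output mode based on the decision head's scores."""
    scores = h @ W_decide
    return "retrieve" if scores[0] > scores[1] else "generate"

# A stand-in for an LLM hidden state at some output position.
h = rng.standard_normal(llm_dim)
v = map_to_visual(h)
print(v.shape)                 # (768,)
print(retrieve_or_generate(h))
```

In the paper both components condition on the LLM's hidden representations while the LLM itself stays frozen, so only the small projection and decision parameters are trained.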