MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation

April 8, 2024
Authors: Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang
cs.AI

Abstract

In this paper, we present MoMA: an open-vocabulary, training-free personalized image model with flexible zero-shot capabilities. As foundational text-to-image models rapidly evolve, the demand for robust image-to-image translation grows. Addressing this need, MoMA specializes in subject-driven personalized image generation. Utilizing an open-source Multimodal Large Language Model (MLLM), we train MoMA to serve a dual role as both a feature extractor and a generator. This approach effectively synergizes reference-image and text-prompt information to produce valuable image features that facilitate an image diffusion model. To better leverage these features, we further introduce a novel self-attention shortcut method that efficiently transfers image features to the image diffusion model, improving the resemblance of the target object in generated images. Remarkably, as a tuning-free plug-and-play module, our model requires only a single reference image and outperforms existing methods in generating images with high detail fidelity, enhanced identity preservation, and prompt faithfulness. Our work is open-source, thereby providing universal access to these advancements.
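The abstract names two mechanisms: the MLLM fuses the reference image with the text prompt into image features, and a self-attention shortcut injects those features into the diffusion model's attention layers. Below is a minimal, hypothetical PyTorch sketch of how such a shortcut could be wired, assuming it acts as a residual branch in which the UNet's self-attention queries also attend to the MLLM-produced features; all names here (`self_attention_with_shortcut`, `ref_feats`, `alpha`) are illustrative assumptions, not the paper's actual API:

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    # Standard scaled dot-product attention.
    scale = q.shape[-1] ** -0.5
    return F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1) @ v

def self_attention_with_shortcut(hidden, ref_feats, to_q, to_k, to_v, alpha=0.5):
    """Self-attention over UNet hidden states, plus a residual 'shortcut'
    branch in which the same queries attend to reference-image features.
    `alpha` (hypothetical) controls how strongly the subject is injected."""
    q = to_q(hidden)
    # Ordinary self-attention over the denoising latents.
    out = attention(q, to_k(hidden), to_v(hidden))
    # Shortcut branch: attend to the MLLM-produced image features instead.
    out_ref = attention(q, to_k(ref_feats), to_v(ref_feats))
    return out + alpha * out_ref

# Toy usage with random tensors standing in for real features.
dim = 64
to_q = torch.nn.Linear(dim, dim)
to_k = torch.nn.Linear(dim, dim)
to_v = torch.nn.Linear(dim, dim)

hidden = torch.randn(1, 256, dim)    # flattened UNet latents at one attention layer
ref_feats = torch.randn(1, 77, dim)  # features fused from reference image + prompt
out = self_attention_with_shortcut(hidden, ref_feats, to_q, to_k, to_v)
print(out.shape)  # torch.Size([1, 256, 64])
```

Because the shortcut is purely additive, such a module could in principle be dropped into a pretrained diffusion model without retraining it, which is consistent with the tuning-free, plug-and-play claim in the abstract.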

