

MoMA: Multimodal LLM Adapter for Fast Personalized Image Generation

April 8, 2024
Authors: Kunpeng Song, Yizhe Zhu, Bingchen Liu, Qing Yan, Ahmed Elgammal, Xiao Yang
cs.AI

Abstract

In this paper, we present MoMA: an open-vocabulary, training-free personalized image model that boasts flexible zero-shot capabilities. As foundational text-to-image models rapidly evolve, the demand for robust image-to-image translation grows. Addressing this need, MoMA specializes in subject-driven personalized image generation. Utilizing an open-source Multimodal Large Language Model (MLLM), we train MoMA to serve a dual role as both a feature extractor and a generator. This approach effectively synergizes reference-image and text-prompt information to produce valuable image features, facilitating an image diffusion model. To better leverage the generated features, we further introduce a novel self-attention shortcut method that efficiently transfers image features to an image diffusion model, improving the resemblance of the target object in generated images. Remarkably, as a tuning-free plug-and-play module, our model requires only a single reference image and outperforms existing methods in generating images with high detail fidelity, enhanced identity preservation, and prompt faithfulness. Our work is open-source, thereby providing universal access to these advancements.
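The abstract describes the self-attention shortcut only at a high level. Below is a minimal PyTorch sketch of one plausible reading: reference-image features produced by the MLLM adapter are projected and concatenated into the keys and values of a UNet self-attention layer, so latent tokens can attend directly to the subject. The class name `SelfAttentionShortcut`, the separate `ref_to_k`/`ref_to_v` projections, and the concatenation wiring are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentionShortcut(nn.Module):
    """Toy self-attention layer whose keys/values are extended with
    reference-image features, letting latent tokens attend to the subject.
    Names and wiring are illustrative, not MoMA's actual implementation."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        # Separate projections for the MLLM-derived image features (assumed).
        self.ref_to_k = nn.Linear(dim, dim, bias=False)
        self.ref_to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, ref_feats: torch.Tensor) -> torch.Tensor:
        # x:         (batch, n_tokens, dim) -- UNet latent tokens
        # ref_feats: (batch, n_ref, dim)    -- features from the MLLM adapter
        b, n, d = x.shape
        h = self.num_heads

        q = self.to_q(x)
        # The "shortcut": keys/values span both the latent tokens and the
        # reference features, so attention can copy subject detail directly.
        k = torch.cat([self.to_k(x), self.ref_to_k(ref_feats)], dim=1)
        v = torch.cat([self.to_v(x), self.ref_to_v(ref_feats)], dim=1)

        def split_heads(t: torch.Tensor) -> torch.Tensor:
            # (batch, seq, dim) -> (batch, heads, seq, dim // heads)
            return t.view(b, -1, h, d // h).transpose(1, 2)

        out = F.scaled_dot_product_attention(
            split_heads(q), split_heads(k), split_heads(v)
        )
        out = out.transpose(1, 2).reshape(b, n, d)
        return self.to_out(out)


# Usage sketch; dimensions are arbitrary examples.
layer = SelfAttentionShortcut(dim=320, num_heads=8)
latents = torch.randn(2, 4096, 320)   # 64x64 latent grid, flattened
ref = torch.randn(2, 77, 320)         # MLLM-produced subject features
out = layer(latents, ref)             # -> (2, 4096, 320)
```

In a plug-and-play setup like the one the abstract describes, a layer of this shape would sit alongside the UNet's frozen self-attention blocks, with `ref_feats` computed once per reference image, which is consistent with the claim that only a single reference image and no tuning are required.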

