

Unified Multimodal Understanding and Generation Models: Advances, Challenges, and Opportunities

May 5, 2025
Authors: Xinjie Zhang, Jintao Guo, Shanshan Zhao, Minghao Fu, Lunhao Duan, Guo-Hua Wang, Qing-Guo Chen, Zhao Xu, Weihua Luo, Kaifu Zhang
cs.AI

Abstract

Recent years have seen remarkable progress in both multimodal understanding models and image generation models. Despite their respective successes, these two domains have evolved independently, leading to distinct architectural paradigms: while autoregressive-based architectures have dominated multimodal understanding, diffusion-based models have become the cornerstone of image generation. Recently, there has been growing interest in developing unified frameworks that integrate these tasks. The emergence of GPT-4o's new capabilities exemplifies this trend, highlighting the potential for unification. However, the architectural differences between the two domains pose significant challenges. To provide a clear overview of current efforts toward unification, we present a comprehensive survey aimed at guiding future research. First, we introduce the foundational concepts and recent advancements in multimodal understanding and text-to-image generation models. Next, we review existing unified models, categorizing them into three main architectural paradigms: diffusion-based, autoregressive-based, and hybrid approaches that fuse autoregressive and diffusion mechanisms. For each category, we analyze the structural designs and innovations introduced by related works. Additionally, we compile datasets and benchmarks tailored for unified models, offering resources for future exploration. Finally, we discuss the key challenges facing this nascent field, including tokenization strategies, cross-modal attention, and data issues. As this area is still in its early stages, we anticipate rapid advancements and will regularly update this survey. Our goal is to inspire further research and provide a valuable reference for the community. The references associated with this survey are available on GitHub (https://github.com/AIDC-AI/Awesome-Unified-Multimodal-Models).
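The contrast the abstract draws between the two paradigms, and the hybrid designs that fuse them, can be made concrete with a minimal sketch. The code below is purely illustrative and is not drawn from the survey or any specific model it covers; all class names (HybridUnifiedModel, AutoregressiveHead, DiffusionHead) and dimensions are hypothetical placeholders showing a shared Transformer backbone feeding an autoregressive head for discrete text/image tokens and a diffusion-style denoising head for continuous image latents.

```python
# Illustrative sketch only: contrasting next-token prediction (autoregressive)
# with iterative denoising (diffusion), combined behind one shared backbone.
import torch
import torch.nn as nn


class AutoregressiveHead(nn.Module):
    """Understanding/generation as next-token prediction over a discrete vocabulary."""

    def __init__(self, hidden_dim: int, vocab_size: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, vocab_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Logits over the next (text or image) token at every position.
        return self.proj(hidden_states)


class DiffusionHead(nn.Module):
    """Image generation as denoising of continuous latents, conditioned on the backbone."""

    def __init__(self, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.denoiser = nn.Sequential(
            nn.Linear(latent_dim + hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, latent_dim),
        )

    def forward(self, noisy_latent: torch.Tensor, condition: torch.Tensor) -> torch.Tensor:
        # Predict the noise to remove from the latent, given the conditioning state.
        return self.denoiser(torch.cat([noisy_latent, condition], dim=-1))


class HybridUnifiedModel(nn.Module):
    """A shared backbone routes to an AR head for tokens and a diffusion head for latents."""

    def __init__(self, hidden_dim: int = 512, vocab_size: int = 32000, latent_dim: int = 64):
        super().__init__()
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        self.ar_head = AutoregressiveHead(hidden_dim, vocab_size)
        self.diff_head = DiffusionHead(hidden_dim, latent_dim)

    def forward(self, token_embeds: torch.Tensor, noisy_latent: torch.Tensor | None = None):
        h = self.backbone(token_embeds)
        text_logits = self.ar_head(h)
        noise_pred = None
        if noisy_latent is not None:
            # Condition the denoiser on the final hidden state of the sequence.
            noise_pred = self.diff_head(noisy_latent, h[:, -1, :])
        return text_logits, noise_pred


if __name__ == "__main__":
    model = HybridUnifiedModel()
    tokens = torch.randn(2, 16, 512)   # a batch of embedded multimodal tokens
    latents = torch.randn(2, 64)       # noisy image latents at some diffusion step
    logits, noise = model(tokens, latents)
    print(logits.shape, noise.shape)   # torch.Size([2, 16, 32000]) torch.Size([2, 64])
```

Purely diffusion-based and purely autoregressive unified models, as categorized in the survey, correspond to keeping only one of the two heads; the hybrid category keeps both and shares the backbone across tasks.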
