

InternVL-U: Democratizing Unified Multimodal Models for Understanding, Reasoning, Generation and Editing

March 10, 2026
Authors: Changyao Tian, Danni Yang, Guanzhou Chen, Erfei Cui, Zhaokai Wang, Yuchen Duan, Penghao Yin, Sitao Chen, Ganlin Yang, Mingxin Liu, Zirun Zhu, Ziqian Fan, Leyao Gu, Haomin Wang, Qi Wei, Jinhui Yin, Xue Yang, Zhihang Zhong, Qi Qin, Yi Xin, Bin Fu, Yihao Liu, Jiaye Ge, Qipeng Guo, Gen Luo, Hongsheng Li, Yu Qiao, Kai Chen, Hongjie Zhang
cs.AI

Abstract

Unified multimodal models (UMMs) that integrate understanding, reasoning, generation, and editing face an inherent trade-off between maintaining strong semantic comprehension and acquiring powerful generation capabilities. In this report, we present InternVL-U, a lightweight 4B-parameter UMM that democratizes these capabilities within a unified framework. Guided by the principles of unified contextual modeling and modality-specific modular design with decoupled visual representations, InternVL-U integrates a state-of-the-art Multimodal Large Language Model (MLLM) with a specialized MMDiT-based visual generation head. To further bridge the gap between aesthetic generation and high-level intelligence, we construct a comprehensive data synthesis pipeline targeting high-semantic-density tasks, such as text rendering and scientific reasoning, under a reasoning-centric paradigm that leverages Chain-of-Thought (CoT) to better align abstract user intent with fine-grained visual generation details. Extensive experiments demonstrate that InternVL-U achieves a superior performance-efficiency balance. Despite using only 4B parameters, it consistently outperforms unified baselines more than three times its size, such as BAGEL (14B), on various generation and editing tasks, while retaining strong multimodal understanding and reasoning capabilities.
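To make the described modular layout concrete, below is a minimal PyTorch sketch of one plausible reading of the design: an MLLM backbone performs unified contextual modeling over text and semantic image tokens, while a separate diffusion head for generation conditions on the backbone's hidden states and operates on its own decoupled visual latents. Every module name, dimension, and component here (SemanticVisionEncoder, MLLMBackbone, MMDiTHead) is a hypothetical stand-in, not InternVL-U's actual implementation, and cross-attention is used as a simplification of MMDiT's joint-attention blocks.

```python
# Hypothetical sketch of the modular UMM design described in the abstract.
# All names and shapes are illustrative, not the paper's implementation.
import torch
import torch.nn as nn

class SemanticVisionEncoder(nn.Module):
    """Understanding pathway: high-level semantic image features."""
    def __init__(self, patch_dim=768, dim=1024):
        super().__init__()
        self.proj = nn.Linear(patch_dim, dim)  # toy patch embedding

    def forward(self, patches):                # (B, N, patch_dim)
        return self.proj(patches)              # (B, N, dim)

class MLLMBackbone(nn.Module):
    """Unified contextual modeling over text + semantic image tokens."""
    def __init__(self, dim=1024, layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, tokens):                 # (B, T, dim)
        return self.blocks(tokens)             # contextual hidden states

class MMDiTHead(nn.Module):
    """Generation pathway: a diffusion-transformer head over separate
    visual latents, conditioned on the MLLM's hidden states. Cross-attention
    stands in for MMDiT's joint attention to keep the sketch short."""
    def __init__(self, latent_dim=16, dim=1024, layers=4):
        super().__init__()
        self.latent_in = nn.Linear(latent_dim, dim)
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerDecoder(layer, num_layers=layers)
        self.latent_out = nn.Linear(dim, latent_dim)

    def forward(self, noisy_latents, context):
        h = self.latent_in(noisy_latents)      # (B, L, dim)
        h = self.blocks(h, memory=context)     # cross-attend to MLLM context
        return self.latent_out(h)              # predicted denoising target

# Usage: understanding and generation share one context, not one visual code.
enc, llm, head = SemanticVisionEncoder(), MLLMBackbone(), MMDiTHead()
text = torch.randn(1, 32, 1024)               # embedded prompt tokens (toy)
img = torch.randn(1, 64, 768)                 # input image patches (toy)
ctx = llm(torch.cat([text, enc(img)], dim=1)) # unified contextual modeling
noise_pred = head(torch.randn(1, 256, 16), ctx)  # generation head output
```

The point of the decoupling, as the abstract frames it, is that the generation head can learn the low-level statistics needed for pixel synthesis in its own latent space, while the MLLM's semantic representations used for understanding and reasoning are left undisturbed.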