

RealUnify: Do Unified Models Truly Benefit from Unification? A Comprehensive Benchmark

September 29, 2025
作者: Yang Shi, Yuhao Dong, Yue Ding, Yuran Wang, Xuanyu Zhu, Sheng Zhou, Wenting Liu, Haochen Tian, Rundong Wang, Huanqian Wang, Zuyan Liu, Bohan Zeng, Ruizhe Chen, Qixun Wang, Zhuoran Zhang, Xinlong Chen, Chengzhuo Tong, Bozhou Li, Chaoyou Fu, Qiang Liu, Haotian Wang, Wenjing Yang, Yuanxing Zhang, Pengfei Wan, Yi-Fan Zhang, Ziwei Liu
cs.AI

Abstract

The integration of visual understanding and generation into unified multimodal models represents a significant stride toward general-purpose AI. However, a fundamental question remains unanswered by existing benchmarks: does this architectural unification actually enable synergetic interaction between the constituent capabilities? Existing evaluation paradigms, which primarily assess understanding and generation in isolation, are insufficient for determining whether a unified model can leverage its understanding to enhance its generation, or use generative simulation to facilitate deeper comprehension. To address this critical gap, we introduce RealUnify, a benchmark specifically designed to evaluate bidirectional capability synergy. RealUnify comprises 1,000 meticulously human-annotated instances spanning 10 categories and 32 subtasks. It is structured around two core axes: 1) Understanding Enhances Generation, which requires reasoning (e.g., commonsense, logic) to guide image generation, and 2) Generation Enhances Understanding, which necessitates mental simulation or reconstruction (e.g., of transformed or disordered visual inputs) to solve reasoning tasks. A key contribution is our dual-evaluation protocol, which combines direct end-to-end assessment with a diagnostic stepwise evaluation that decomposes tasks into distinct understanding and generation phases. This protocol allows us to precisely discern whether performance bottlenecks stem from deficiencies in core abilities or from a failure to integrate them. Through large-scale evaluations of 12 leading unified models and 6 specialized baselines, we find that current unified models still struggle to achieve effective synergy, indicating that architectural unification alone is insufficient. These results highlight the need for new training strategies and inductive biases to fully unlock the potential of unified modeling.