**Some Modalities are More Equal Than Others: Decoding and Architecting Multimodal Integration in MLLMs**
November 28, 2025
Authors: Tianle Chen, Chaitanya Chakka, Arjun Reddy Akula, Xavier Thomas, Deepti Ghadiyaram
cs.AI
Abstract
Despite remarkable advancements in Multimodal Large Language Models (MLLMs), a fundamental question remains: are MLLMs robust to contradicting modalities? To rigorously study this, we introduce MMA-Bench, comprising videos and tasks that probe a model's reliance on specific modalities. Using black-box and white-box interpretability techniques, we provide a critical analysis of the brittleness of both open- and closed-source MLLMs. We show that current MLLMs struggle under misaligned audio-visual pairs and simple misleading text, thereby lacking robust multimodal reasoning. Building on these findings, we propose a modality alignment tuning strategy to teach the model when to prioritize, leverage, or ignore specific modality cues. Through extensive experiments and analysis, we show that our alignment tuning yields demonstrably stronger multimodal grounding. This work provides both interpretability tools and a clear path toward developing MLLMs with intrinsically reliable cross-modal reasoning. Code and dataset will be made publicly available.
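
As a rough illustration of the kind of black-box probe the abstract describes, the sketch below is not the MMA-Bench code: `Sample`, `flip_rate`, and the `model` callable are hypothetical names chosen for this example. It compares a model's answers on matched versus audio-swapped versions of the same clip and reports how often the prediction flips.

```python
# Hypothetical sketch of a black-box modality-reliance probe (not the MMA-Bench code).
# Idea: ask the same question on a clip with its original audio and on the same clip
# with a deliberately mismatched audio track, then measure how often the answer changes.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Sample:
    video_path: str   # visual stream
    audio_path: str   # audio stream (possibly swapped in from another clip)
    question: str
    answer: str       # ground-truth answer


def flip_rate(
    model: Callable[[str, str, str], str],  # (video, audio, question) -> predicted answer
    matched: List[Sample],
    mismatched: List[Sample],
) -> float:
    """Fraction of paired samples whose prediction changes when the audio is mismatched."""
    flips = 0
    for m, mm in zip(matched, mismatched):
        pred_matched = model(m.video_path, m.audio_path, m.question)
        pred_mismatched = model(mm.video_path, mm.audio_path, mm.question)
        if pred_matched != pred_mismatched:
            flips += 1
    return flips / max(len(matched), 1)
```

Under this framing, a high flip rate on questions whose answer is fully determined by the visual stream would suggest the model over-relies on audio cues; the same comparison can be run with misleading text prompts in place of swapped audio.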