多模态大語言模型深受模態偏差影響
MLLMs are Deeply Affected by Modality Bias
May 24, 2025
Authors: Xu Zheng, Chenfei Liao, Yuqian Fu, Kaiyu Lei, Yuanhuiyi Lyu, Lutao Jiang, Bin Ren, Jialei Chen, Jiawen Wang, Chengxin Li, Linfeng Zhang, Danda Pani Paudel, Xuanjing Huang, Yu-Gang Jiang, Nicu Sebe, Dacheng Tao, Luc Van Gool, Xuming Hu
cs.AI
Abstract
Recent advances in Multimodal Large Language Models (MLLMs) have shown
promising results in integrating diverse modalities such as texts and images.
However, MLLMs are heavily influenced by modality bias, often relying on language while
under-utilizing other modalities like visual inputs. This position paper argues
that MLLMs are deeply affected by modality bias. Firstly, we diagnose the
current state of modality bias, highlighting its manifestations across various
tasks. Secondly, we propose a systematic research roadmap related to modality
bias in MLLMs. Thirdly, we identify key factors of modality bias in MLLMs and
offer actionable suggestions for future research to mitigate it. To
substantiate these findings, we conduct experiments that demonstrate the
influence of each factor: 1. Data Characteristics: Language data is compact and
abstract, while visual data is redundant and complex, creating an inherent
imbalance in learning dynamics. 2. Imbalanced Backbone Capabilities: The
dominance of pretrained language models in MLLMs leads to overreliance on
language and neglect of visual information. 3. Training Objectives: Current
objectives often fail to promote balanced cross-modal alignment, resulting in
shortcut learning biased toward language. These findings highlight the need for
balanced training strategies and model architectures to better integrate
multiple modalities in MLLMs. We call for interdisciplinary efforts to tackle
these challenges and drive innovation in MLLM research. Our work provides a
fresh perspective on modality bias in MLLMs and offers insights for developing
more robust and generalizable multimodal systems, advancing progress toward
Artificial General Intelligence.
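The language-shortcut behavior described above can be made concrete with a simple probe: compare a model's accuracy with and without its visual input. The sketch below is a minimal illustration, not the paper's experimental protocol; the `predict(question, image)` interface and the toy examples are assumptions for demonstration. A model whose accuracy barely drops when the image is withheld is likely exploiting language priors rather than the visual modality.

```python
# Hedged sketch: quantifying modality bias as the fraction of full-input
# accuracy a model retains when the image is withheld. The `predict`
# function is a hypothetical stand-in for an MLLM's answer interface.

def modality_bias_score(examples, predict):
    """examples: list of (question, image, answer) triples.
    predict(question, image) -> answer string; pass image=None to ablate vision.
    Returns (full_acc, text_only_acc, bias) where bias in [0, 1] is the
    share of full-input accuracy retained without the image."""
    n = len(examples)
    full = sum(predict(q, img) == ans for q, img, ans in examples)
    text_only = sum(predict(q, None) == ans for q, _, ans in examples)
    full_acc, text_acc = full / n, text_only / n
    bias = text_acc / full_acc if full_acc > 0 else 0.0
    return full_acc, text_acc, bias

# Toy illustration with a fake predictor that ignores the image entirely,
# answering from language priors alone (maximal modality bias).
examples = [("what color is the sky?", "img1", "blue"),
            ("what animal is shown?", "img2", "cat")]
language_only = {"what color is the sky?": "blue",
                 "what animal is shown?": "dog"}
predict = lambda q, img: language_only[q]
print(modality_bias_score(examples, predict))  # → (0.5, 0.5, 1.0)
```

A bias of 1.0 here means the image contributed nothing: every answer the model got right, it got right from the question text alone, which is exactly the shortcut learning the paper attributes to imbalanced backbones and training objectives.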