Modality Curation: Building Universal Embeddings for Advanced Multimodal Information Retrieval
May 26, 2025
Authors: Fanheng Kong, Jingyuan Zhang, Yahui Liu, Hongzhi Zhang, Shi Feng, Xiaocui Yang, Daling Wang, Yu Tian, Victoria W., Fuzheng Zhang, Guorui Zhou
cs.AI
Abstract
Multimodal information retrieval (MIR) faces inherent challenges due to the heterogeneity of data sources and the complexity of cross-modal alignment. While previous studies have identified modality gaps in feature spaces, a systematic approach to addressing these challenges remains unexplored. In this work, we introduce UNITE, a universal framework that tackles these challenges through two critical yet underexplored aspects: data curation and modality-aware training configurations. Our work provides the first comprehensive analysis of how modality-specific data properties influence downstream task performance across diverse scenarios. Moreover, we propose Modal-Aware Masked Contrastive Learning (MAMCL) to mitigate competitive relationships among instances of different modalities. Our framework achieves state-of-the-art results on multiple multimodal retrieval benchmarks, outperforming existing methods by notable margins. Through extensive experiments, we demonstrate that strategic modality curation and tailored training protocols are pivotal for robust cross-modal representation learning. This work not only advances MIR performance but also provides a foundational blueprint for future research in multimodal systems. Our project is available at https://friedrichor.github.io/projects/UNITE.
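For concreteness, below is a minimal PyTorch sketch of the masked contrastive idea described above. This is one plausible reading of MAMCL, not the paper's exact formulation: we assume the mask drops in-batch negatives whose modality differs from the positive candidate's, so heterogeneous instances do not compete within the same softmax. The function name `mamcl_loss`, the integer modality-id encoding, and the temperature value are illustrative assumptions; consult the project page for the authors' actual objective.

```python
import torch
import torch.nn.functional as F

def mamcl_loss(query_emb, cand_emb, cand_modality, temperature=0.05):
    """Modality-aware masked InfoNCE (illustrative sketch only).

    For each query i, the positive is candidate i. In-batch negatives
    whose modality differs from the positive's are masked out, so
    candidates of other modalities do not compete in the same softmax.

    query_emb:     (B, D) L2-normalized query embeddings
    cand_emb:      (B, D) L2-normalized candidate embeddings
    cand_modality: (B,)   integer modality id per candidate
                          (e.g. 0=text, 1=image, 2=video, 3=fused)
    """
    # Pairwise cosine similarities scaled by temperature: (B, B)
    logits = query_emb @ cand_emb.t() / temperature
    # Keep candidate j for query i only if it shares the modality of
    # the positive candidate i; the diagonal is always kept.
    same_modality = cand_modality.unsqueeze(0) == cand_modality.unsqueeze(1)
    logits = logits.masked_fill(~same_modality, float("-inf"))
    targets = torch.arange(query_emb.size(0), device=query_emb.device)
    return F.cross_entropy(logits, targets)
```

Under this reading, the mask reduces gradient pressure that would otherwise push embeddings of different modalities apart merely because they co-occur as negatives, which is one way to interpret "mitigating competitive relationships" in the abstract.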