FINER: MLLMs Hallucinate under Fine-grained Negative Queries
March 18, 2026
作者: Rui Xiao, Sanghwan Kim, Yongqin Xian, Zeynep Akata, Stephan Alaniz
cs.AI
Abstract
Multimodal large language models (MLLMs) struggle with hallucinations, particularly under fine-grained queries, a challenge underrepresented by existing benchmarks that focus on coarse image-related questions. We introduce FIne-grained NEgative queRies (FINER), along with two benchmarks: FINER-CompreCap and FINER-DOCCI. Using FINER, we analyze hallucinations across four settings: multi-object, multi-attribute, multi-relation, and "what" questions. Our benchmarks reveal that MLLMs hallucinate when fine-grained mismatches co-occur with genuinely present elements in the image. To address this, we propose FINER-Tuning, which applies Direct Preference Optimization (DPO) to FINER-inspired data. Finetuning four frontier MLLMs with FINER-Tuning reduces hallucination rates on our benchmarks by up to 24.2% (InternVL3.5-14B), while simultaneously improving performance on eight existing hallucination suites and enhancing general multimodal capabilities across six benchmarks. Code, benchmarks, and models are available at https://explainableml.github.io/finer-project/.
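The abstract states that FINER-Tuning applies Direct Preference Optimization (DPO) to FINER-inspired preference data. As background, a minimal sketch of the standard per-example DPO objective (the general Rafailov et al. formulation, not the authors' exact implementation; the function name and a `beta` of 0.1 are illustrative assumptions):

```python
import math

def dpo_loss(pi_logp_chosen: float, pi_logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Inputs are summed token log-probabilities of the chosen (preferred, e.g.
    non-hallucinated) and rejected (e.g. hallucinated) responses under the
    policy being tuned and the frozen reference model.
    """
    logits = beta * ((pi_logp_chosen - ref_logp_chosen)
                     - (pi_logp_rejected - ref_logp_rejected))
    # -log sigmoid(logits); lower when the policy prefers the chosen response
    # more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy matches the reference on both responses the margin is zero and the loss is log 2; increasing the policy's relative preference for the chosen response drives the loss toward zero.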