Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation
March 25, 2025
Authors: Hongcheng Gao, Jiashu Qu, Jingyi Tang, Baolong Bi, Yue Liu, Hongyu Chen, Li Liang, Li Su, Qingming Huang
cs.AI
Abstract
The hallucination of large multimodal models (LMMs), providing responses that
appear correct but are actually incorrect, limits their reliability and
applicability. This paper studies the hallucination problem of LMMs in the
video modality, which is more dynamic and challenging than static modalities
such as images and text. Motivated by this, we first present a
comprehensive benchmark termed HAVEN for evaluating hallucinations of LMMs in
video understanding tasks. It is built upon three dimensions, i.e.,
hallucination causes, hallucination aspects, and question formats, resulting in
6K questions. Then, we quantitatively study 7 factors that influence
hallucinations, e.g., video duration, model size, and model reasoning
capability, via experiments with 16 LMMs on the presented benchmark. In addition,
inspired by recent thinking models like OpenAI o1, we propose a video-thinking
model to mitigate the hallucinations of LMMs via supervised reasoning
fine-tuning (SRFT) and direct preference optimization (TDPO), where SRFT
enhances reasoning capabilities while TDPO reduces hallucinations in the
thinking process. Extensive experiments and analyses demonstrate the
effectiveness of the model. Remarkably, it improves over the baseline by 7.65%
in accuracy on hallucination evaluation and reduces the bias score by 4.5%.
The code and data are publicly available at https://github.com/Hongcheng-Gao/HAVEN.
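
The abstract does not spell out the TDPO objective. As a rough illustration only, the sketch below assumes it follows the standard direct preference optimization formulation, applied to paired preferred/rejected thinking traces; the function and argument names are hypothetical and not taken from the released code.

```python
# Minimal, hypothetical sketch of a DPO-style preference loss over "thinking"
# traces. This is an assumption for illustration, not the authors' released
# implementation.
import torch
import torch.nn.functional as F

def preference_loss(policy_chosen_logps: torch.Tensor,
                    policy_rejected_logps: torch.Tensor,
                    ref_chosen_logps: torch.Tensor,
                    ref_rejected_logps: torch.Tensor,
                    beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss over summed sequence log-probabilities.

    Each tensor has shape (batch,): the log-probability of the preferred
    (chosen) or dispreferred (rejected) thinking trace under the trainable
    policy model or the frozen reference model.
    """
    # Implicit rewards: how far the policy has moved from the reference on
    # each trace, scaled by beta.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Bradley-Terry objective: widen the margin between preferred and
    # dispreferred traces.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Under this reading, SRFT would first instill the step-by-step reasoning format via supervised fine-tuning, after which a loss of this form would steer the model toward thinking traces that avoid hallucinated claims about the video.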