Exploring Hallucination of Large Multimodal Models in Video Understanding: Benchmark, Analysis and Mitigation
March 25, 2025
Authors: Hongcheng Gao, Jiashu Qu, Jingyi Tang, Baolong Bi, Yue Liu, Hongyu Chen, Li Liang, Li Su, Qingming Huang
cs.AI
Abstract
The hallucination of large multimodal models (LMMs), providing responses that
appear correct but are actually incorrect, limits their reliability and
applicability. This paper aims to study the hallucination problem of LMMs in
video modality, which is dynamic and more challenging compared to static
modalities like images and text. From this motivation, we first present a
comprehensive benchmark termed HAVEN for evaluating hallucinations of LMMs in
video understanding tasks. It is built upon three dimensions, i.e.,
hallucination causes, hallucination aspects, and question formats, resulting in
6K questions. Then, we quantitatively study 7 influential factors on
hallucinations, e.g., duration time of videos, model sizes, and model
reasoning, via experiments of 16 LMMs on the presented benchmark. In addition,
inspired by recent thinking models like OpenAI o1, we propose a video-thinking
model to mitigate the hallucinations of LMMs via supervised reasoning
fine-tuning (SRFT) and direct preference optimization (TDPO), where SRFT
enhances reasoning capabilities while TDPO reduces hallucinations in the
thinking process. Extensive experiments and analyses demonstrate the
effectiveness of the approach. Remarkably, it improves the baseline by 7.65% in accuracy on
hallucination evaluation and reduces the bias score by 4.5%. The code and data
are public at https://github.com/Hongcheng-Gao/HAVEN.
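The abstract does not spell out TDPO's exact objective; as background, here is a minimal sketch of the standard DPO loss that preference-optimization methods of this kind build on. The function name, tensor shapes, and the value of beta are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def dpo_loss(pi_chosen_logps: torch.Tensor,
             pi_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective (Rafailov et al.): push the policy's
    log-probability margin between preferred (chosen) and dispreferred
    (rejected) responses above the frozen reference model's margin.

    All inputs are per-example sequence log-probabilities.
    """
    pi_margin = pi_chosen_logps - pi_rejected_logps      # policy preference margin
    ref_margin = ref_chosen_logps - ref_rejected_logps   # reference preference margin
    # Loss shrinks as the policy margin exceeds the reference margin.
    return -F.logsigmoid(beta * (pi_margin - ref_margin)).mean()
```

In a thinking-model setting such as the one described above, the "chosen" and "rejected" responses would be reasoning traces with and without hallucinated content, so the same objective penalizes hallucination in the thought process itself.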