VideoEval-Pro: Robust and Realistic Long Video Understanding Evaluation
May 20, 2025
Authors: Wentao Ma, Weiming Ren, Yiming Jia, Zhuofeng Li, Ping Nie, Ge Zhang, Wenhu Chen
cs.AI
Abstract
Large multimodal models (LMMs) have recently emerged as a powerful tool for
long video understanding (LVU), prompting the development of standardized LVU
benchmarks to evaluate their performance. However, our investigation reveals a
rather sober lesson about existing LVU benchmarks. First, most existing
benchmarks rely heavily on multiple-choice questions (MCQs), whose evaluation
results are inflated by the possibility of guessing the correct answer.
Second, a significant portion of the questions in these benchmarks carry strong
priors that allow models to answer them directly without even watching the input video.
For example, Gemini-1.5-Pro achieves over 50% accuracy given only a single random frame
from a long video on Video-MME. We also observe that increasing the number of
frames does not necessarily lead to improvement on existing benchmarks, which
is counterintuitive. As a result, the validity and robustness of current LVU
benchmarks are undermined, impeding a faithful assessment of LMMs' long-video
understanding capability. To tackle this problem, we propose VideoEval-Pro, a
realistic LVU benchmark built around open-ended short-answer questions
that truly require understanding the entire video. VideoEval-Pro assesses both
segment-level and full-video understanding through perception and reasoning
tasks. By evaluating 21 proprietary and open-source video LMMs, we arrive at the
following findings: (1) video LMMs show drastic performance drops (>25%) on
open-ended questions compared with MCQs; (2) surprisingly, higher MCQ scores do
not lead to higher open-ended scores on VideoEval-Pro; (3) compared to other
MCQ benchmarks, VideoEval-Pro benefits more from increasing the number of input
frames. Our results show that VideoEval-Pro offers a more realistic and
reliable measure of long video understanding, providing a clearer view of
progress in this domain.
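As a rough illustration of the single-frame prior probe described in the abstract, the following minimal Python sketch (not the authors' code) estimates how much MCQ accuracy a model retains when shown only one randomly sampled frame per video. Here `query_model` and the benchmark item schema are hypothetical placeholders for whatever video LMM API and dataset loader are actually used.

import random

# Illustrative sketch only: measure how well MCQs can be answered from a single
# random frame, i.e., how strong the question priors are without long-video context.
# `query_model` is a hypothetical callable wrapping a video LMM; each benchmark item
# is assumed to provide pre-extracted frames, the question, MCQ options, and a letter answer.
def single_frame_prior_accuracy(items, query_model, seed=0):
    rng = random.Random(seed)
    correct = 0
    for item in items:
        frame = rng.choice(item["frames"])  # one random frame stands in for the whole video
        pred = query_model(frames=[frame],
                           question=item["question"],
                           options=item["options"])
        correct += int(pred.strip().upper() == item["answer"].upper())
    return correct / len(items)

# With four options, blind guessing already yields about 25%; a single-frame score well
# above that (e.g., the >50% reported for Gemini-1.5-Pro on Video-MME) suggests that
# many questions can be answered without watching the long video at all.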