Video-MME: The First-Ever Comprehensive Evaluation Benchmark of Multi-modal LLMs in Video Analysis
May 31, 2024
Authors: Chaoyou Fu, Yuhan Dai, Yongdong Luo, Lei Li, Shuhuai Ren, Renrui Zhang, Zihan Wang, Chenyu Zhou, Yunhang Shen, Mengdan Zhang, Peixian Chen, Yanwei Li, Shaohui Lin, Sirui Zhao, Ke Li, Tong Xu, Xiawu Zheng, Enhong Chen, Rongrong Ji, Xing Sun
cs.AI
Abstract
In the quest for artificial general intelligence, Multi-modal Large Language
Models (MLLMs) have emerged as a focal point in recent advancements. However,
the predominant focus remains on developing their capabilities in static image
understanding. The potential of MLLMs in processing sequential visual data is
still insufficiently explored, highlighting the absence of a comprehensive,
high-quality assessment of their performance. In this paper, we introduce
Video-MME, the first-ever full-spectrum, Multi-Modal Evaluation benchmark of
MLLMs in Video analysis. Our work distinguishes itself from existing benchmarks
through four key features: 1) Diversity in video types, spanning 6 primary
visual domains with 30 subfields to ensure broad scenario generalizability; 2)
Duration in the temporal dimension, encompassing short-, medium-, and
long-term videos, ranging from 11 seconds to 1 hour, for robust contextual
dynamics; 3) Breadth in data modalities, integrating multi-modal inputs besides
video frames, including subtitles and audio, to unveil the all-round
capabilities of MLLMs; 4) Quality in annotations, utilizing rigorous manual
labeling by expert annotators to facilitate precise and reliable model
assessment. In total, 900 videos spanning 256 hours are manually selected and
annotated by repeatedly viewing all the video content, resulting in 2,700
question-answer pairs. With Video-MME, we extensively evaluate various
state-of-the-art MLLMs, including GPT-4 series and Gemini 1.5 Pro, as well as
open-source image models like InternVL-Chat-V1.5 and video models like
LLaVA-NeXT-Video. Our experiments reveal that Gemini 1.5 Pro is the
best-performing commercial model, significantly outperforming the open-source
models. Our dataset, along with these findings, underscores the need for further
improvements in handling longer sequences and multi-modal data. Project Page:
https://video-mme.github.io
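Since the benchmark is built from multiple-choice question-answer pairs over videos of varying length, the sketch below illustrates how one might score a model on such items and break accuracy down by duration bucket. It is a minimal, hypothetical example: the field names (video_path, subtitles, duration, question, options, answer) and the predict() stub are illustrative assumptions, not the released Video-MME schema or evaluation toolkit.

```python
# Minimal sketch (not the official Video-MME toolkit): scoring an MLLM on
# multiple-choice QA items of the kind the benchmark describes, with accuracy
# reported per duration bucket. All field names and predict() are assumptions.
from collections import defaultdict
from typing import Optional


def predict(video_path: str, subtitles: Optional[str],
            question: str, options: list) -> str:
    """Stand-in for an MLLM call; replace with real model inference.
    Expected to return a single option letter such as 'A'."""
    return "A"  # placeholder answer


def evaluate(items: list) -> dict:
    """Compute overall and per-duration accuracy over a list of QA items."""
    correct, total = defaultdict(int), defaultdict(int)
    for item in items:
        pred = predict(item["video_path"], item.get("subtitles"),
                       item["question"], item["options"])
        bucket = item["duration"]  # e.g. "short", "medium", "long"
        total[bucket] += 1
        correct[bucket] += int(pred == item["answer"])
    acc = {k: correct[k] / total[k] for k in total}
    acc["overall"] = sum(correct.values()) / max(sum(total.values()), 1)
    return acc


if __name__ == "__main__":
    # Tiny illustrative item (not taken from the real dataset).
    demo = [{
        "video_path": "videos/example.mp4",
        "subtitles": None,
        "duration": "short",
        "question": "What sport is being played?",
        "options": ["A. Basketball", "B. Tennis", "C. Swimming", "D. Golf"],
        "answer": "A",
    }]
    print(evaluate(demo))
```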