

MME-VideoOCR: Evaluating OCR-Based Capabilities of Multimodal LLMs in Video Scenarios

May 27, 2025
Authors: Yang Shi, Huanqian Wang, Wulin Xie, Huanyao Zhang, Lijie Zhao, Yi-Fan Zhang, Xinfeng Li, Chaoyou Fu, Zhuoer Wen, Wenting Liu, Zhuoran Zhang, Xinlong Chen, Bohan Zeng, Sihan Yang, Yuanxing Zhang, Pengfei Wan, Haotian Wang, Wenjing Yang
cs.AI

Abstract

Multimodal Large Language Models (MLLMs) have achieved considerable accuracy in Optical Character Recognition (OCR) from static images. However, their efficacy in video OCR is significantly diminished due to factors such as motion blur, temporal variations, and visual effects inherent in video content. To provide clearer guidance for training practical MLLMs, we introduce the MME-VideoOCR benchmark, which encompasses a comprehensive range of video OCR application scenarios. MME-VideoOCR features 10 task categories comprising 25 individual tasks and spans 44 diverse scenarios. These tasks extend beyond text recognition to incorporate deeper comprehension and reasoning of textual content within videos. The benchmark consists of 1,464 videos with varying resolutions, aspect ratios, and durations, along with 2,000 meticulously curated, manually annotated question-answer pairs. We evaluate 18 state-of-the-art MLLMs on MME-VideoOCR, revealing that even the best-performing model (Gemini-2.5 Pro) achieves an accuracy of only 73.7%. Fine-grained analysis indicates that while existing MLLMs demonstrate strong performance on tasks where relevant texts are contained within a single or few frames, they exhibit limited capability in effectively handling tasks that demand holistic video comprehension. These limitations are especially evident in scenarios that require spatio-temporal reasoning, cross-frame information integration, or resistance to language prior bias. Our findings also highlight the importance of high-resolution visual input and sufficient temporal coverage for reliable OCR in dynamic video scenarios.
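The benchmark reports a single accuracy figure (e.g. 73.7% for Gemini-2.5 Pro) over its 2,000 manually annotated question-answer pairs. As a rough illustration of how such a score is typically computed, the sketch below implements exact-match accuracy over predicted answers; the function name, matching rule, and data are hypothetical and are not the official MME-VideoOCR evaluation code.

```python
# Hypothetical sketch of benchmark-style accuracy scoring over annotated
# QA pairs. Exact-match after normalization is assumed here for
# illustration; the actual MME-VideoOCR scoring protocol may differ.

def score_predictions(predictions: dict, annotations: dict) -> float:
    """Return exact-match accuracy of predictions against gold answers."""
    correct = 0
    for qa_id, gold in annotations.items():
        pred = predictions.get(qa_id, "")
        # Normalize whitespace and case before comparing.
        if pred.strip().lower() == gold.strip().lower():
            correct += 1
    return correct / len(annotations)

# Toy example: 2 of 4 predicted answers match after normalization.
annotations = {"q1": "STOP", "q2": "Exit 12", "q3": "open", "q4": "7pm"}
predictions = {"q1": "stop", "q2": "Exit 12", "q3": "closed", "q4": "7 pm"}
acc = score_predictions(predictions, annotations)  # 0.5
```

In practice, video-QA benchmarks often add answer-normalization rules (stripping punctuation, mapping multiple-choice letters) before matching; the normalization above is deliberately minimal.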
