

MME-VideoOCR: Evaluating OCR-Based Capabilities of Multimodal LLMs in Video Scenarios

May 27, 2025
Authors: Yang Shi, Huanqian Wang, Wulin Xie, Huanyao Zhang, Lijie Zhao, Yi-Fan Zhang, Xinfeng Li, Chaoyou Fu, Zhuoer Wen, Wenting Liu, Zhuoran Zhang, Xinlong Chen, Bohan Zeng, Sihan Yang, Yuanxing Zhang, Pengfei Wan, Haotian Wang, Wenjing Yang
cs.AI

Abstract

Multimodal Large Language Models (MLLMs) have achieved considerable accuracy in Optical Character Recognition (OCR) from static images. However, their efficacy in video OCR is significantly diminished due to factors such as motion blur, temporal variations, and visual effects inherent in video content. To provide clearer guidance for training practical MLLMs, we introduce the MME-VideoOCR benchmark, which encompasses a comprehensive range of video OCR application scenarios. MME-VideoOCR features 10 task categories comprising 25 individual tasks and spans 44 diverse scenarios. These tasks extend beyond text recognition to incorporate deeper comprehension and reasoning of textual content within videos. The benchmark consists of 1,464 videos with varying resolutions, aspect ratios, and durations, along with 2,000 meticulously curated, manually annotated question-answer pairs. We evaluate 18 state-of-the-art MLLMs on MME-VideoOCR, revealing that even the best-performing model (Gemini-2.5 Pro) achieves an accuracy of only 73.7%. Fine-grained analysis indicates that while existing MLLMs demonstrate strong performance on tasks where relevant texts are contained within a single or few frames, they exhibit limited capability in effectively handling tasks that demand holistic video comprehension. These limitations are especially evident in scenarios that require spatio-temporal reasoning, cross-frame information integration, or resistance to language prior bias. Our findings also highlight the importance of high-resolution visual input and sufficient temporal coverage for reliable OCR in dynamic video scenarios.
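The reported accuracy (e.g., 73.7% for Gemini-2.5 Pro) is the fraction of the benchmark's 2,000 annotated question-answer pairs a model answers correctly. A minimal sketch of such a scoring loop is below; the exact-match normalization used here is an assumption for illustration, not the paper's actual judging protocol.

```python
# Hypothetical sketch of per-benchmark accuracy scoring: each model
# prediction is compared against its annotated ground-truth answer,
# and accuracy is the fraction of correct answers.

def is_correct(prediction: str, answer: str) -> bool:
    # Exact match after whitespace/case normalization (an assumption;
    # the benchmark's real judging rules may be more permissive).
    return prediction.strip().lower() == answer.strip().lower()

def accuracy(predictions: list[str], answers: list[str]) -> float:
    correct = sum(is_correct(p, a) for p, a in zip(predictions, answers))
    return correct / len(answers)

# Toy example with hypothetical QA outputs (not benchmark data).
preds = ["MME-VideoOCR", "25 tasks", "Gemini"]
golds = ["mme-videoocr", "25 tasks", "GPT"]
print(accuracy(preds, golds))
```

In practice a judging step like this would run after extracting each model's final answer from its free-form response, which is where most evaluation harnesses differ.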

