

V-ReasonBench: Toward Unified Reasoning Benchmark Suite for Video Generation Models

November 20, 2025
Authors: Yang Luo, Xuanlei Zhao, Baijiong Lin, Lingting Zhu, Liyao Tang, Yuqi Liu, Ying-Cong Chen, Shengju Qian, Xin Wang, Yang You
cs.AI

Abstract

Recent progress in generative video models, such as Veo-3, has shown surprising zero-shot reasoning abilities, creating a growing need for systematic and reliable evaluation. We introduce V-ReasonBench, a benchmark designed to assess video reasoning across four key dimensions: structured problem-solving, spatial cognition, pattern-based inference, and physical dynamics. The benchmark is built from both synthetic and real-world image sequences and provides a diverse set of answer-verifiable tasks that are reproducible, scalable, and unambiguous. Evaluations of six state-of-the-art video models reveal clear dimension-wise differences, with strong variation in structured, spatial, pattern-based, and physical reasoning. We further compare video models with strong image models, analyze common hallucination behaviors, and study how video duration affects Chain-of-Frames reasoning. Overall, V-ReasonBench offers a unified and reproducible framework for measuring video reasoning and aims to support the development of models with more reliable, human-aligned reasoning skills.
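The abstract emphasizes that every task is answer-verifiable and grouped into four reasoning dimensions. As a rough illustration of what such a per-dimension, verifiable evaluation loop might look like, the following Python sketch defines a hypothetical task record and accuracy aggregator; the field names, dimension identifiers, and scoring rule are assumptions for illustration only, not the benchmark's actual interface.

```python
# Hypothetical sketch of an answer-verifiable evaluation loop in the spirit of
# V-ReasonBench. Task fields, dimension names, and scoring are illustrative
# assumptions, not the benchmark's published API.
from dataclasses import dataclass
from typing import Callable, Dict, List

DIMENSIONS = [
    "structured_problem_solving",
    "spatial_cognition",
    "pattern_based_inference",
    "physical_dynamics",
]

@dataclass
class Task:
    dimension: str                  # one of DIMENSIONS
    prompt: str                     # text/image prompt given to the video model
    verify: Callable[[str], bool]   # checks the model's extracted final answer

def evaluate(model_answer: Callable[[Task], str], tasks: List[Task]) -> Dict[str, float]:
    """Return per-dimension accuracy; each task has a single verifiable answer."""
    correct = {d: 0 for d in DIMENSIONS}
    total = {d: 0 for d in DIMENSIONS}
    for task in tasks:
        total[task.dimension] += 1
        if task.verify(model_answer(task)):
            correct[task.dimension] += 1
    return {d: correct[d] / total[d] for d in DIMENSIONS if total[d] > 0}

if __name__ == "__main__":
    # Toy example: a pattern-based task whose expected answer is "7".
    toy = Task("pattern_based_inference",
               "Continue the sequence 1, 3, 5, ...",
               verify=lambda ans: ans.strip() == "7")
    print(evaluate(lambda t: "7", [toy]))  # {'pattern_based_inference': 1.0}
```

Keeping verification as a per-task callable (rather than a single string match) is one way to accommodate tasks whose answers must be read off generated frames in different forms, while still yielding reproducible, dimension-wise accuracy scores.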