
Reasoning via Video: The First Evaluation of Video Models' Reasoning Abilities through Maze-Solving Tasks

November 19, 2025
Authors: Cheng Yang, Haiyuan Wan, Yiran Peng, Xin Cheng, Zhaoyang Yu, Jiayi Zhang, Junchi Yu, Xinlei Yu, Xiawu Zheng, Dongzhan Zhou, Chenglin Wu
cs.AI

Abstract

Video models have achieved remarkable success in high-fidelity video generation with coherent motion dynamics. Analogous to the progression from text generation to text-based reasoning in language modeling, the development of video models motivates us to ask: can video models reason via video generation? Compared with a discrete text corpus, video grounds reasoning in explicit spatial layouts and temporal continuity, making it an ideal substrate for spatial reasoning. In this work, we explore the reasoning-via-video paradigm and introduce VR-Bench -- a comprehensive benchmark designed to systematically evaluate video models' reasoning capabilities. Grounded in maze-solving tasks that inherently require spatial planning and multi-step reasoning, VR-Bench contains 7,920 procedurally generated videos spanning five maze types and diverse visual styles. Our empirical analysis demonstrates that supervised fine-tuning (SFT) can efficiently elicit the reasoning ability of video models. Video models exhibit stronger spatial perception during reasoning, outperforming leading VLMs and generalizing well across diverse scenarios, tasks, and levels of complexity. We further observe a test-time scaling effect, where diverse sampling during inference improves reasoning reliability by 10--20%. These findings highlight the unique potential and scalability of reasoning via video for spatial reasoning tasks.
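The abstract's mention of procedurally generated mazes can be pictured with a minimal sketch. The Python snippet below implements a standard depth-first (recursive backtracker) carve-out; it is an illustrative assumption only, since the paper's actual generation pipeline, its five maze types, and the rendering of mazes into videos are not described here.

```python
import random

def generate_maze(width, height, seed=None):
    """Carve a rectangular maze with a depth-first (recursive backtracker)
    walk. Cells are grid coordinates; walls are frozensets of the two cells
    they separate, and carving a passage removes the corresponding wall."""
    rng = random.Random(seed)
    cells = {(x, y) for x in range(width) for y in range(height)}

    # Start with every internal wall in place.
    walls = set()
    for x in range(width):
        for y in range(height):
            if x + 1 < width:
                walls.add(frozenset({(x, y), (x + 1, y)}))
            if y + 1 < height:
                walls.add(frozenset({(x, y), (x, y + 1)}))

    visited = {(0, 0)}
    stack = [(0, 0)]
    while stack:
        cx, cy = stack[-1]
        neighbors = [
            (cx + dx, cy + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (cx + dx, cy + dy) in cells and (cx + dx, cy + dy) not in visited
        ]
        if neighbors:
            nxt = rng.choice(neighbors)
            walls.discard(frozenset({(cx, cy), nxt}))  # open a passage
            visited.add(nxt)
            stack.append(nxt)
        else:
            stack.pop()  # dead end: backtrack
    return walls

if __name__ == "__main__":
    maze_walls = generate_maze(8, 8, seed=42)
    print(f"remaining internal walls: {len(maze_walls)}")
```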
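Similarly, the reported test-time scaling effect amounts to drawing several diverse samples at inference and keeping one that a verifier accepts. The best-of-k loop below is a hedged sketch of that idea; the names `best_of_k` and `sample_walk` and the toy random-walk "model" are hypothetical and do not reflect the benchmark's actual video decoding or evaluation code.

```python
import random
from typing import Callable, List, Optional, Tuple, TypeVar

Candidate = TypeVar("Candidate")

def best_of_k(sample: Callable[[], Candidate],
              is_valid: Callable[[Candidate], bool],
              k: int = 8) -> Optional[Candidate]:
    """Draw up to k diverse samples and return the first one accepted by a
    validity check (e.g. the decoded trajectory reaches the maze exit).
    Returns None if no sample passes."""
    for _ in range(k):
        candidate = sample()
        if is_valid(candidate):
            return candidate
    return None

# Toy usage: the "model" proposes short random walks on a grid, and the
# checker accepts a walk that ends at the goal cell. This only illustrates
# the sample-and-verify loop behind test-time scaling.
if __name__ == "__main__":
    goal = (3, 3)

    def sample_walk() -> List[Tuple[int, int]]:
        pos, path = (0, 0), [(0, 0)]
        for _ in range(12):
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            pos = (pos[0] + dx, pos[1] + dy)
            path.append(pos)
        return path

    result = best_of_k(sample_walk, lambda p: p[-1] == goal, k=32)
    print("found a valid walk" if result else "no valid walk in 32 samples")
```

Raising k trades extra inference compute for a higher chance that at least one sample solves the maze, which is the sense in which diverse sampling improves reliability in the abstract.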