Active Video Perception: Iterative Evidence Seeking for Agentic Long Video Understanding
December 5, 2025
Authors: Ziyang Wang, Honglu Zhou, Shijie Wang, Junnan Li, Caiming Xiong, Silvio Savarese, Mohit Bansal, Michael S. Ryoo, Juan Carlos Niebles
cs.AI
Abstract
Long video understanding (LVU) is challenging because answering real-world queries often depends on sparse, temporally dispersed cues buried in hours of mostly redundant and irrelevant content. While agentic pipelines improve video reasoning capabilities, prevailing frameworks rely on a query-agnostic captioner to perceive video information, which wastes computation on irrelevant content and blurs fine-grained temporal and spatial information. Motivated by active perception theory, we argue that LVU agents should actively decide what, when, and where to observe, and continuously assess whether the current observation is sufficient to answer the query. We present Active Video Perception (AVP), an evidence-seeking framework that treats the video as an interactive environment and acquires compact, query-relevant evidence directly from pixels. Concretely, AVP runs an iterative plan-observe-reflect process with MLLM agents. In each round, a planner proposes targeted video interactions, an observer executes them to extract time-stamped evidence, and a reflector evaluates the sufficiency of the evidence for the query, either halting with an answer or triggering further observation. Across five LVU benchmarks, AVP achieves the highest performance with significant improvements. Notably, AVP outperforms the best agentic method by 5.7% in average accuracy while requiring only 18.4% of the inference time and 12.4% of the input tokens.
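To make the described loop concrete, the sketch below outlines the plan-observe-reflect iteration in Python. It is a minimal illustration based only on the abstract, not the authors' implementation; names such as `Planner`, `Observer`, `Reflector`, `active_video_perception`, and `max_rounds` are hypothetical placeholders for the MLLM agents and the round budget.

```python
# Minimal sketch (not the authors' code) of the plan-observe-reflect loop
# described in the AVP abstract. Planner/Observer/Reflector stand in for the
# MLLM agents; their interfaces here are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class Evidence:
    timestamp: float      # seconds into the video where the cue was found
    description: str      # what the observer extracted at that timestamp


@dataclass
class AVPState:
    query: str
    evidence: list[Evidence] = field(default_factory=list)


def active_video_perception(video, query, planner, observer, reflector,
                            max_rounds: int = 8):
    """Iteratively gather query-relevant evidence until the reflector
    judges it sufficient, then return the answer."""
    state = AVPState(query=query)
    for _ in range(max_rounds):
        # 1. Plan: propose targeted video interactions (e.g., segments to inspect).
        actions = planner.propose(video, state)
        # 2. Observe: execute the interactions and extract time-stamped evidence.
        state.evidence.extend(observer.execute(video, actions))
        # 3. Reflect: decide whether the accumulated evidence answers the query.
        sufficient, answer = reflector.assess(state)
        if sufficient:
            return answer
    # Round budget exhausted: fall back to the reflector's best answer so far.
    return reflector.best_guess(state)
```

In this sketch the termination condition lives entirely in the reflector, matching the abstract's description of either halting with an answer or triggering further observation.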