
Agentic Very Long Video Understanding

January 26, 2026
Authors: Aniket Rege, Arka Sadhu, Yuliang Li, Kejie Li, Ramya Korlakai Vinayak, Yuning Chai, Yong Jae Lee, Hyo Jin Kim
cs.AI

Abstract

The advent of always-on personal AI assistants, enabled by all-day wearable devices such as smart glasses, demands a new level of contextual understanding, one that goes beyond short, isolated events to encompass the continuous, longitudinal stream of egocentric video. Achieving this vision requires advances in long-horizon video understanding, where systems must interpret and recall visual and audio information spanning days or even weeks. Existing methods, including large language models and retrieval-augmented generation, are constrained by limited context windows and lack the ability to perform compositional, multi-hop reasoning over very long video streams. In this work, we address these challenges through EGAgent, an enhanced agentic framework centered on entity scene graphs, which represent people, places, objects, and their relationships over time. Our system equips a planning agent with tools for structured search and reasoning over these graphs, as well as hybrid visual and audio search capabilities, enabling detailed, cross-modal, and temporally coherent reasoning. Experiments on the EgoLifeQA and Video-MME (Long) datasets show that our method achieves state-of-the-art performance on EgoLifeQA (57.5%) and competitive performance on Video-MME (Long) (74.1%) for complex longitudinal video understanding tasks.
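To make the core idea concrete, the sketch below illustrates what an entity scene graph with timestamped relations might look like, and how a planning agent could query it as a tool for multi-hop lookups. This is a minimal illustration based only on the abstract; the class names, fields, and query interface (Entity, Relation, EntitySceneGraph.query) are assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed structure, not the paper's code): an entity scene graph
# over people, places, and objects, with time-stamped relations and a simple
# lookup a planning agent could invoke as a structured-search tool.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Entity:
    name: str   # e.g. "Alice", "kitchen", "mug"
    kind: str   # "person" | "place" | "object"


@dataclass(frozen=True)
class Relation:
    subject: Entity
    predicate: str   # e.g. "holds", "located_in", "talks_to"
    obj: Entity
    start_s: float   # interval start, in seconds of the video stream
    end_s: float     # interval end


@dataclass
class EntitySceneGraph:
    relations: list[Relation] = field(default_factory=list)

    def add(self, relation: Relation) -> None:
        self.relations.append(relation)

    def query(self, subject: str | None = None, predicate: str | None = None,
              obj: str | None = None, t: float | None = None) -> list[Relation]:
        """Return relations matching the given fields, optionally active at time t."""
        return [
            r for r in self.relations
            if (subject is None or r.subject.name == subject)
            and (predicate is None or r.predicate == predicate)
            and (obj is None or r.obj.name == obj)
            and (t is None or r.start_s <= t <= r.end_s)
        ]


# Hypothetical multi-hop query: "Where was the mug while Alice was holding it?"
g = EntitySceneGraph()
alice = Entity("Alice", "person")
mug = Entity("mug", "object")
kitchen = Entity("kitchen", "place")
g.add(Relation(alice, "holds", mug, 120.0, 150.0))
g.add(Relation(mug, "located_in", kitchen, 100.0, 300.0))

hold = g.query(subject="Alice", predicate="holds", obj="mug")[0]
places = g.query(subject="mug", predicate="located_in", t=hold.start_s)
print(places[0].obj.name)  # -> "kitchen"
```

In this reading, the graph supplies the temporally grounded facts, while the planning agent decides which lookups to chain, which is what enables the compositional, multi-hop reasoning over very long streams described in the abstract.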