
VideoAtlas: Navigating Long-Form Video in Logarithmic Compute

March 18, 2026
Authors: Mohamed Eltahir, Ali Habibullah, Yazan Alshoibi, Lama Ayash, Tanveer Hussain, Naeemullah Khan
cs.AI

Abstract

Extending language models to video introduces two challenges: representation, where existing methods rely on lossy approximations, and long context, where caption- or agent-based pipelines collapse video into text and lose visual fidelity. To overcome this, we introduce VideoAtlas, a task-agnostic environment that represents video as a hierarchical grid which is simultaneously lossless, navigable, scalable, and free of captions and preprocessing. An overview of the video is available at a glance, and any region can be recursively zoomed into, with the same visual representation used uniformly for the video, intermediate investigations, and the agent's memory, eliminating lossy text conversion end-to-end. This hierarchical structure ensures that access depth grows only logarithmically with video length. For long context, Recursive Language Models (RLMs) recently offered a powerful solution for long text, but extending them to the visual domain requires a structured environment to recurse into, which VideoAtlas provides. Formulating VideoAtlas as a Markov Decision Process unlocks Video-RLM: a parallel Master-Worker architecture in which a Master coordinates global exploration while Workers concurrently drill into assigned regions to accumulate lossless visual evidence. We demonstrate three key findings: (1) logarithmic compute growth with video duration, further amplified by a 30-60% multimodal cache hit rate arising from the grid's structural reuse; (2) environment budgeting, where bounding the maximum exploration depth provides a principled compute-accuracy hyperparameter; and (3) emergent adaptive compute allocation that scales with question granularity. When scaling from 1-hour to 10-hour benchmarks, Video-RLM remains the most duration-robust method with minimal accuracy degradation, demonstrating that structured environment navigation is a viable and scalable paradigm for video understanding.
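The logarithmic-access claim follows directly from the hierarchical grid: if each level tiles its region into a fixed number of cells, the number of zoom-in steps needed to reach any single frame grows with the logarithm of the frame count. A minimal sketch of this relationship, assuming a hypothetical branching factor of 9 cells per level (a 3x3 grid; the paper's actual grid layout is not specified here):

```python
def atlas_depth(num_frames: int, grid_size: int = 9) -> int:
    """Zoom depth needed to isolate one frame in a hierarchical grid.

    Each level subdivides the current region into `grid_size` cells
    (hypothetical branching factor), so depth grows logarithmically
    with num_frames: depth = ceil(log_{grid_size}(num_frames)).
    """
    depth = 0
    cells = 1
    while cells < num_frames:
        cells *= grid_size
        depth += 1
    return depth

# A tenfold increase in duration adds only one zoom level:
print(atlas_depth(3600))    # 1-hour video at 1 fps -> 4
print(atlas_depth(36000))   # 10-hour video at 1 fps -> 5
```

The same quantity doubles as the budget knob from finding (2): capping the depth passed to the agent bounds worst-case compute per query at the cost of the finest-grained evidence.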
PDF · March 20, 2026