VideoAtlas: Navigating Long-Form Video in Logarithmic Compute

March 18, 2026
Authors: Mohamed Eltahir, Ali Habibullah, Yazan Alshoibi, Lama Ayash, Tanveer Hussain, Naeemullah Khan
cs.AI

Abstract

Extending language models to video introduces two challenges: representation, where existing methods rely on lossy approximations, and long context, where caption- or agent-based pipelines collapse video into text and lose visual fidelity. To overcome both, we introduce VideoAtlas, a task-agnostic environment that represents video as a hierarchical grid that is simultaneously lossless, navigable, and scalable, with no captioning or preprocessing required. An overview of the video is available at a glance, and any region can be recursively zoomed into; the same visual representation is used uniformly for the video, intermediate investigations, and the agent's memory, eliminating lossy text conversion end to end. This hierarchical structure ensures that access depth grows only logarithmically with video length. For long context, Recursive Language Models (RLMs) recently offered a powerful solution for long text, but extending them to the visual domain requires a structured environment to recurse into, which VideoAtlas provides. Formulating VideoAtlas as a Markov Decision Process unlocks Video-RLM: a parallel Master-Worker architecture in which a Master coordinates global exploration while Workers concurrently drill into assigned regions to accumulate lossless visual evidence. We demonstrate three key findings: (1) compute grows logarithmically with video duration, further amplified by a 30-60% multimodal cache hit rate arising from the grid's structural reuse; (2) environment budgeting, where bounding the maximum exploration depth provides a principled compute-accuracy hyperparameter; (3) emergent adaptive compute allocation that scales with question granularity. When scaling from 1-hour to 10-hour benchmarks, Video-RLM remains the most duration-robust method with minimal accuracy degradation, demonstrating that structured environment navigation is a viable and scalable paradigm for video understanding.
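
To make the logarithmic-access claim concrete, here is a minimal Python sketch of a hierarchical grid over video frames in the spirit of the abstract's description. The names (GridNode, zoom, max_depth) and the 3x3 branching factor are illustrative assumptions, not the paper's actual API or implementation.

```python
import math

# Hypothetical sketch of a hierarchical video grid: each view tiles its
# frame range into a fixed number of cells, and zooming into a cell
# recurses one level deeper. Frames are never downsampled away, so the
# representation stays lossless; only the depth of access changes.

class GridNode:
    """One grid view covering frames [start, end) of the video."""

    def __init__(self, start: int, end: int, branching: int = 9):
        self.start, self.end = start, end
        self.branching = branching  # e.g. a 3x3 grid -> 9 cells per view

    def cells(self) -> list["GridNode"]:
        """Tile this view into up to `branching` equal sub-regions."""
        span = max(1, (self.end - self.start) // self.branching)
        return [
            GridNode(s, min(s + span, self.end), self.branching)
            for s in range(self.start, self.end, span)
        ]

    def zoom(self, cell_index: int) -> "GridNode":
        """Recursively zoom into one cell of this view."""
        return self.cells()[cell_index]


def max_depth(num_frames: int, branching: int = 9) -> int:
    """Worst-case zoom depth grows logarithmically with video length."""
    return math.ceil(math.log(max(num_frames, 2), branching))


# A 10-hour video sampled at 1 fps (36,000 frames) is reachable in ~5 zooms:
root = GridNode(0, 36_000)
print(max_depth(36_000))           # -> 5
print(root.zoom(3).zoom(0).start)  # drill two levels: region 3, then cell 0
```

Because every zoom path revisits the same deterministic grid views, repeated navigation naturally produces cache hits on already-rendered tiles, which is consistent with the 30-60% multimodal cache hit rate the abstract attributes to the grid's structural reuse.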