Divide, then Ground: Adapting Frame Selection to Query Types for Long-Form Video Understanding

December 3, 2025
Authors: Jialuo Li, Bin Li, Jiahao Li, Yan Lu
cs.AI

Abstract

The application of Large Multimodal Models (LMMs) to long-form video understanding is constrained by limited context lengths and the computationally prohibitive cost of processing dense video tokens. Consequently, recent research has focused on query-aware frame selection, a class of methods that often incurs significant computational overhead. This paper challenges the assumption that such complex search mechanisms are universally necessary. We first identify and validate a query typology distinguishing between global queries and localized queries. We demonstrate that while uniform sampling is both effective and efficient for global queries, localized queries do require query-aware selection for optimal performance. Building on this insight, we propose DIG, a training-free frame selection framework that adapts its strategy to the query type. Specifically, DIG employs efficient uniform sampling for global queries while activating a specialized pipeline to extract query-relevant frames for localized queries. Experiments on three long-form video understanding benchmarks demonstrate that DIG consistently outperforms existing baselines and robustly improves LMM performance, even when scaling the input frame count to 256.
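The abstract describes DIG only at a high level (dispatch on query type: uniform sampling for global queries, a query-aware pipeline for localized ones) and does not specify the actual pipeline. Below is a minimal illustrative sketch of that dispatch logic, not the paper's implementation: the `select_frames`, `uniform_sample`, and `localized_sample` names are hypothetical, and the relevance scores stand in for whatever frame–query matcher DIG actually uses.

```python
import numpy as np


def uniform_sample(num_frames: int, budget: int) -> list[int]:
    """Pick `budget` frame indices spread evenly across the video."""
    if budget >= num_frames:
        return list(range(num_frames))
    return np.linspace(0, num_frames - 1, budget).round().astype(int).tolist()


def localized_sample(frame_scores: np.ndarray, budget: int) -> list[int]:
    """Pick the `budget` frames with the highest query-relevance scores.

    `frame_scores` is a stand-in for any frame-query matcher (e.g. a
    CLIP-style similarity); the paper's actual pipeline is not given
    in the abstract.
    """
    top = np.argsort(frame_scores)[-budget:]
    return sorted(top.tolist())


def select_frames(query_type: str, num_frames: int, budget: int,
                  frame_scores: np.ndarray | None = None) -> list[int]:
    """Dispatch on the query type, mirroring the global/localized split."""
    if query_type == "global":
        return uniform_sample(num_frames, budget)
    if frame_scores is None:
        raise ValueError("localized queries need per-frame relevance scores")
    return localized_sample(frame_scores, budget)


# Toy usage: a 3000-frame video with an 8-frame budget.
rng = np.random.default_rng(0)
print(select_frames("global", 3000, 8))
print(select_frames("localized", 3000, 8, frame_scores=rng.random(3000)))
```

How the incoming query is classified as global or localized, and how the relevance scores are produced, are exactly the design choices the paper addresses; the sketch only shows why the adaptive dispatch keeps global queries cheap while spending compute on localized ones.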