

ReFoCUS: Reinforcement-guided Frame Optimization for Contextual Understanding

June 2, 2025
Authors: Hosu Lee, Junho Kim, Hyunjun Kim, Yong Man Ro
cs.AI

Abstract
Recent progress in Large Multi-modal Models (LMMs) has enabled effective vision-language reasoning, yet the ability to understand video content remains constrained by suboptimal frame selection strategies. Existing approaches often rely on static heuristics or external retrieval modules to feed frame information into video-LLMs, which may fail to provide query-relevant information. In this work, we introduce ReFoCUS (Reinforcement-guided Frame Optimization for Contextual UnderStanding), a novel frame-level policy optimization framework that shifts the optimization target from textual responses to visual input selection. ReFoCUS learns a frame selection policy via reinforcement learning, using reward signals derived from a reference LMM to reflect the model's intrinsic preferences for frames that best support temporally grounded responses. To efficiently explore the large combinatorial frame space, we employ an autoregressive, conditional selection architecture that ensures temporal coherence while reducing complexity. Our approach does not require explicit supervision at the frame level and consistently improves reasoning performance across multiple video QA benchmarks, highlighting the benefits of aligning frame selection with model-internal utility.
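The core idea — autoregressively sampling frames (each step conditioned on what is already selected) and updating the selection policy with a reward from a reference model — can be illustrated with a minimal toy sketch. Everything below is an assumption for illustration only: the names (`select_frames`, `reference_reward`), the per-frame score parameterization, and the simplified REINFORCE-style update are not the paper's actual implementation, and the reference-LMM reward is replaced by a stub.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    z = x - x.max()
    e = np.exp(z)
    return e / e.sum()

def select_frames(scores, k, rng):
    """Autoregressively sample k distinct frame indices.

    Each step conditions on previously chosen frames by masking them
    out, so only k sequential draws are needed instead of scoring all
    C(n, k) subsets of the combinatorial frame space.
    """
    n = len(scores)
    chosen, logp = [], 0.0
    mask = np.zeros(n, dtype=bool)
    for _ in range(k):
        logits = np.where(mask, -np.inf, scores)
        p = softmax(logits)
        idx = int(rng.choice(n, p=p))
        logp += np.log(p[idx])
        chosen.append(idx)
        mask[idx] = True
    return sorted(chosen), logp

def reference_reward(frames, relevant):
    """Stub standing in for the reference-LMM reward: the fraction of
    selected frames the (hypothetical) reference model finds useful."""
    return len(set(frames) & relevant) / len(frames)

# Toy REINFORCE loop: learn per-frame scores so that sampling favors
# the hypothetical query-relevant frames.
n, k = 16, 4
relevant = {2, 5, 9, 13}   # assumed query-relevant frames (toy data)
scores = np.zeros(n)
baseline = 0.0             # moving-average reward baseline
for step in range(2000):
    frames, logp = select_frames(scores, k, rng)
    r = reference_reward(frames, relevant)
    baseline = 0.9 * baseline + 0.1 * r
    adv = r - baseline
    # Softmax-policy gradient, (counts - k*p) * advantage; the k
    # masked steps are approximated by one unmasked softmax for brevity.
    g = np.zeros(n)
    for f in frames:
        g[f] += 1.0
    g -= k * softmax(scores)
    scores += 0.1 * adv * g
```

After training, the scores of the frames the stub reward favors should dominate, so sampling concentrates on them. In the real framework the reward would instead come from the reference LMM's preference for frames supporting a temporally grounded answer.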