Video-MTR: Reinforced Multi-Turn Reasoning for Long Video Understanding
August 28, 2025
Authors: Yuan Xie, Tianshui Chen, Zheng Ge, Lionel Ni
cs.AI
Abstract
Long-form video understanding, characterized by long-range temporal
dependencies and multiple events, remains a challenge. Existing methods often
rely on static reasoning or external visual-language models (VLMs), which face
issues like complexity and sub-optimal performance due to the lack of
end-to-end training. In this paper, we propose Video-MTR, a reinforced
multi-turn reasoning framework designed to enable iterative key video segment
selection and question comprehension. Unlike traditional video reasoning
pipelines, which generate predictions in a single turn, Video-MTR performs
reasoning in multiple turns, selecting video segments progressively based on
the evolving understanding of previously processed segments and the current
question. This iterative process allows for a more refined and contextually
aware analysis of the video. To ensure the quality of the intermediate reasoning process, we
introduce a novel gated bi-level reward system, combining trajectory-level
rewards based on answer correctness and turn-level rewards emphasizing
frame-query relevance. This system optimizes both video segment selection and
question comprehension, eliminating the need for external VLMs and allowing
end-to-end training. Extensive experiments on benchmarks like VideoMME, MLVU,
and EgoSchema demonstrate that Video-MTR outperforms existing methods in both
accuracy and efficiency, advancing the state-of-the-art in long video
understanding.
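The gated bi-level reward described in the abstract can be sketched as below. This is a minimal illustration only: the function name, the gating rule (withholding the turn-level relevance bonus unless the final answer is correct), and the `turn_weight` coefficient are all assumptions for exposition, not the paper's actual formulation.

```python
def gated_bilevel_reward(answer_correct: bool,
                         turn_relevances: list[float],
                         turn_weight: float = 0.5) -> float:
    """Combine a trajectory-level reward (final answer correctness) with
    turn-level rewards (per-turn frame-query relevance scores in [0, 1]).

    Gating assumption (illustrative): the turn-level bonus is paid out only
    when the final answer is correct, so the policy cannot be rewarded for
    selecting "relevant" segments while still answering wrongly.
    """
    trajectory_reward = 1.0 if answer_correct else 0.0
    if not answer_correct or not turn_relevances:
        # Gate closed: only the trajectory-level signal survives.
        return trajectory_reward
    # Average the per-turn relevance scores into a single turn-level term.
    turn_reward = sum(turn_relevances) / len(turn_relevances)
    return trajectory_reward + turn_weight * turn_reward
```

Under this reading, the turn-level term shapes segment selection during correct trajectories, while incorrect trajectories receive no relevance credit, which keeps the two reward levels from being optimized independently.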