MUSEG: Reinforcing Video Temporal Understanding via Timestamp-Aware Multi-Segment Grounding
May 27, 2025
Authors: Fuwen Luo, Shengfeng Lou, Chi Chen, Ziyue Wang, Chenliang Li, Weizhou Shen, Jiyue Guo, Peng Li, Ming Yan, Ji Zhang, Fei Huang, Yang Liu
cs.AI
Abstract
Video temporal understanding is crucial for multimodal large language models
(MLLMs) to reason over events in videos. Despite recent advances in general
video understanding, current MLLMs still struggle with fine-grained temporal
reasoning. While reinforcement learning (RL) has been explored to address this
issue recently, existing RL approaches remain limited in effectiveness. In this
work, we propose MUSEG, a novel RL-based method that enhances temporal
understanding by introducing timestamp-aware multi-segment grounding. MUSEG
enables MLLMs to align queries with multiple relevant video segments, promoting
more comprehensive temporal reasoning. To facilitate effective learning, we
design a customized RL training recipe with phased rewards that progressively
guides the model toward temporally grounded reasoning. Extensive experiments on
temporal grounding and time-sensitive video QA tasks demonstrate that MUSEG
significantly outperforms existing methods and generalizes well across diverse
temporal understanding scenarios. View our project at
https://github.com/THUNLP-MT/MUSEG.
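The abstract describes aligning a query with multiple relevant video segments and scoring that alignment with RL rewards. A minimal sketch of how such a multi-segment temporal reward could look, using greedy IoU matching between predicted and ground-truth (start, end) segments. This is an illustrative assumption for intuition only, not MUSEG's actual reward function:

```python
def interval_iou(a, b):
    """Temporal IoU between two (start, end) segments, in seconds."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def multi_segment_reward(pred, gold):
    """Greedily match each ground-truth segment to its best unmatched
    prediction; return the mean IoU over ground-truth segments.
    (Hypothetical sketch -- the paper's phased rewards may differ.)"""
    used = set()
    total = 0.0
    for g in gold:
        best, best_j = 0.0, None
        for j, p in enumerate(pred):
            if j in used:
                continue
            iou = interval_iou(p, g)
            if iou > best:
                best, best_j = iou, j
        if best_j is not None:
            used.add(best_j)
        total += best
    return total / len(gold) if gold else 0.0
```

Under this kind of reward, a model is only fully rewarded when every queried segment is localized, which matches the abstract's emphasis on grounding multiple segments rather than a single one.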