

LongVideoAgent: Multi-Agent Reasoning with Long Videos

December 23, 2025
Authors: Runtao Liu, Ziyi Liu, Jiaqi Tang, Yue Ma, Renjie Pi, Jipeng Zhang, Qifeng Chen
cs.AI

Abstract

Recent advances in multimodal LLMs and systems that use tools for long-video QA point to the promise of reasoning over hour-long episodes. However, many methods still compress content into lossy summaries or rely on limited toolsets, weakening temporal grounding and missing fine-grained cues. We propose a multi-agent framework in which a master LLM coordinates a grounding agent to localize question-relevant segments and a vision agent to extract targeted textual observations. The master agent plans within a step limit and is trained with reinforcement learning to encourage concise, correct, and efficient multi-agent cooperation. This design helps the master agent focus on relevant clips via grounding, complements subtitles with visual detail, and yields interpretable trajectories. On our proposed LongTVQA and LongTVQA+, episode-level datasets aggregated from TVQA/TVQA+, our multi-agent system significantly outperforms strong non-agent baselines. Experiments also show that reinforcement learning further strengthens the trained agent's reasoning and planning. Code and data will be shared at https://longvideoagent.github.io/.
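The control flow described in the abstract — a master agent planning under a step limit, delegating segment localization to a grounding agent and per-segment observation to a vision agent — can be sketched as below. This is a minimal illustration with stub agents; all class names, method signatures, and the stopping rule are assumptions for illustration, not the paper's actual interfaces.

```python
# Hypothetical sketch of the loop described in the abstract. Stubs stand in
# for the LLM-backed agents; only the coordination structure is illustrated.
from dataclasses import dataclass, field


@dataclass
class Segment:
    start: float  # seconds
    end: float


class GroundingAgent:
    """Stub: localize segments plausibly relevant to the question."""
    def localize(self, question: str, video_id: str) -> list[Segment]:
        return [Segment(120.0, 180.0)]  # placeholder localization


class VisionAgent:
    """Stub: produce a textual observation for one segment."""
    def observe(self, video_id: str, seg: Segment) -> str:
        return f"[{seg.start:.0f}-{seg.end:.0f}s] observed visual detail"


@dataclass
class MasterAgent:
    grounding: GroundingAgent
    vision: VisionAgent
    step_limit: int = 4  # plan within a fixed step budget
    trajectory: list = field(default_factory=list)  # interpretable trace

    def answer(self, question: str, video_id: str, subtitles: str) -> str:
        observations: list[str] = []
        for step in range(self.step_limit):
            # Ask the grounding agent for question-relevant segments.
            for seg in self.grounding.localize(question, video_id):
                obs = self.vision.observe(video_id, seg)
                observations.append(obs)
                self.trajectory.append((step, seg, obs))
            # Stub stopping rule: stop once some evidence is gathered.
            if observations:
                break
        # The real system would have the master LLM fuse subtitles and
        # observations into an answer; here we concatenate as a placeholder.
        return " | ".join([subtitles] + observations)


master = MasterAgent(GroundingAgent(), VisionAgent())
result = master.answer("What happened?", "episode_01", "subtitle text")
```

The step limit bounds tool calls per question, which is also what the reinforcement-learning objective in the paper rewards: trajectories that are concise and efficient as well as correct.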