
LongVideoAgent: Multi-Agent Reasoning with Long Videos

December 23, 2025
Authors: Runtao Liu, Ziyi Liu, Jiaqi Tang, Yue Ma, Renjie Pi, Jipeng Zhang, Qifeng Chen
cs.AI

Abstract

Recent advances in multimodal LLMs and systems that use tools for long-video QA point to the promise of reasoning over hour-long episodes. However, many methods still compress content into lossy summaries or rely on limited toolsets, weakening temporal grounding and missing fine-grained cues. We propose a multi-agent framework in which a master LLM coordinates a grounding agent to localize question-relevant segments and a vision agent to extract targeted textual observations. The master agent plans with a step limit, and is trained with reinforcement learning to encourage concise, correct, and efficient multi-agent cooperation. This design helps the master agent focus on relevant clips via grounding, complements subtitles with visual detail, and yields interpretable trajectories. On our proposed LongTVQA and LongTVQA+ which are episode-level datasets aggregated from TVQA/TVQA+, our multi-agent system significantly outperforms strong non-agent baselines. Experiments also show reinforcement learning further strengthens reasoning and planning for the trained agent. Code and data will be shared at https://longvideoagent.github.io/.
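The coordination pattern the abstract describes (a master agent planning under a step limit, calling a grounding agent to localize relevant segments and a vision agent for targeted textual observations) could be sketched as below. This is a minimal illustration, not the paper's implementation: the agent internals are toy stubs, and all function names and the keyword-overlap grounding heuristic are assumptions.

```python
def grounding_agent(question, subtitles):
    """Stub grounding agent: localize question-relevant segments.
    Here, naive keyword overlap stands in for a learned localizer."""
    words = question.lower().split()
    return [i for i, s in enumerate(subtitles)
            if any(w in s.lower() for w in words)]

def vision_agent(segment_id):
    """Stub vision agent: return a textual observation for one segment."""
    return f"observation for segment {segment_id}"

def master_agent(question, subtitles, step_limit=4):
    """Toy master loop: ground first, then gather visual observations,
    stopping once the step budget is exhausted. Returns the answer
    plus an interpretable trajectory of (action, result) steps."""
    trajectory = []
    segments = grounding_agent(question, subtitles)
    trajectory.append(("ground", segments))
    for seg in segments:
        if len(trajectory) >= step_limit:
            break  # respect the master agent's step limit
        trajectory.append(("observe", vision_agent(seg)))
    # A real system would have the master LLM synthesize an answer here.
    answer = f"answered using {len(trajectory) - 1} observations"
    return answer, trajectory
```

The trajectory list makes the decision sequence inspectable, mirroring the interpretable trajectories the abstract claims; in the paper this loop is driven by an RL-trained LLM rather than fixed control flow.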