Towards Multimodal Lifelong Understanding: A Dataset and Agentic Baseline
March 5, 2026
Authors: Guo Chen, Lidong Lu, Yicheng Liu, Liangrui Dong, Lidong Zou, Jixin Lv, Zhenquan Li, Xinyi Mao, Baoqi Pei, Shihao Wang, Zhiqi Li, Karan Sapra, Fuxiao Liu, Yin-Dong Zheng, Yifei Huang, Limin Wang, Zhiding Yu, Andrew Tao, Guilin Liu, Tong Lu
cs.AI
Abstract
While datasets for video understanding have scaled to hour-long durations, they typically consist of densely concatenated clips that differ from natural, unscripted daily life. To bridge this gap, we introduce MM-Lifelong, a dataset designed for Multimodal Lifelong Understanding. Comprising 181.1 hours of footage, it is structured across Day, Week, and Month scales to capture varying temporal densities. Extensive evaluations reveal two critical failure modes in current paradigms: end-to-end MLLMs suffer from a Working Memory Bottleneck due to context saturation, while representative agentic baselines experience Global Localization Collapse when navigating sparse, month-long timelines. To address this, we propose the Recursive Multimodal Agent (ReMA), which employs dynamic memory management to iteratively update a recursive belief state, significantly outperforming existing methods. Finally, we establish dataset splits designed to isolate temporal and domain biases, providing a rigorous foundation for future research in supervised learning and out-of-distribution generalization.
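The abstract only names ReMA's core mechanism, so the following is a minimal sketch, assuming a caption-level timeline and simple FIFO eviction, of how a recursive belief state with dynamically managed memory might be iterated over a long timeline. Every identifier below (BeliefState, update_belief, is_relevant, answer_query) is a hypothetical illustration, not the authors' implementation.

```python
# Hypothetical sketch of the recursive belief-state idea described above.
# All names are illustrative assumptions, not the authors' API.
from dataclasses import dataclass, field


@dataclass
class BeliefState:
    """Bounded working memory plus a running summary (hypothetical)."""
    memory: list = field(default_factory=list)
    capacity: int = 8  # cap entries so long timelines cannot saturate context
    summary: str = ""


def update_belief(state: BeliefState, observation: str) -> BeliefState:
    """Fold one observation into the belief state, evicting old entries.

    FIFO eviction stands in for ReMA's dynamic memory management, which
    the abstract does not specify in detail.
    """
    state.memory.append(observation)
    if len(state.memory) > state.capacity:
        state.memory.pop(0)
    state.summary = " | ".join(state.memory)
    return state


def is_relevant(caption: str, query: str) -> bool:
    """Trivial keyword filter standing in for a learned relevance model."""
    return any(tok in caption.lower() for tok in query.lower().split())


def answer_query(timeline: list, query: str) -> str:
    """Scan a long timeline clip by clip, recursively refining the belief,
    and answer from the final bounded state instead of the full video."""
    state = BeliefState()
    for caption in timeline:  # e.g., captions of day-level segments
        if is_relevant(caption, query):
            state = update_belief(state, caption)
    return f"Answer based on: {state.summary or 'no relevant evidence'}"


if __name__ == "__main__":
    clips = ["morning: made coffee", "noon: office meeting",
             "evening: coffee with a friend"]
    print(answer_query(clips, "coffee"))
```

The point of the sketch is the contrast the abstract draws: rather than feeding an ever-growing context to an end-to-end model (the working-memory bottleneck), the agent keeps a fixed-size state that is recursively refined as the timeline is traversed.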