Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding
March 14, 2024
作者: Guo Chen, Yifei Huang, Jilan Xu, Baoqi Pei, Zhe Chen, Zhiqi Li, Jiahao Wang, Kunchang Li, Tong Lu, Limin Wang
cs.AI
Abstract
Understanding videos is one of the fundamental directions in computer vision
research, with extensive efforts dedicated to exploring various architectures
such as RNNs, 3D CNNs, and Transformers. The recently proposed state space
model architecture, e.g., Mamba, shows promising traits for extending its
success in long-sequence modeling to video modeling. To assess whether Mamba
can be a viable
alternative to Transformers in the video understanding domain, in this work, we
conduct a comprehensive set of studies, probing different roles Mamba can play
in modeling videos, while investigating diverse tasks where Mamba could exhibit
superiority. We categorize Mamba into four roles for modeling videos, derive
a Video Mamba Suite composed of 14 models/modules, and evaluate them on 12
video understanding tasks. Our extensive experiments reveal the strong
potential of Mamba on both video-only and video-language tasks while showing
promising efficiency-performance trade-offs. We hope this work could provide
valuable data points and insights for future research on video understanding.
Code is public: https://github.com/OpenGVLab/video-mamba-suite.