VideoMamba: State Space Model for Efficient Video Understanding
March 11, 2024
Authors: Kunchang Li, Xinhao Li, Yi Wang, Yinan He, Yali Wang, Limin Wang, Yu Qiao
cs.AI
Abstract
Addressing the dual challenges of local redundancy and global dependencies in
video understanding, this work innovatively adapts the Mamba to the video
domain. The proposed VideoMamba overcomes the limitations of existing 3D
convolution neural networks and video transformers. Its linear-complexity
operator enables efficient long-term modeling, which is crucial for
high-resolution long video understanding. Extensive evaluations reveal
VideoMamba's four core abilities: (1) Scalability in the visual domain without
extensive dataset pretraining, thanks to a novel self-distillation technique;
(2) Sensitivity for recognizing short-term actions even with fine-grained
motion differences; (3) Superiority in long-term video understanding,
showcasing significant advancements over traditional feature-based models; and
(4) Compatibility with other modalities, demonstrating robustness in
multi-modal contexts. Through these distinct advantages, VideoMamba sets a new
benchmark for video understanding, offering a scalable and efficient solution
for comprehensive video understanding. All the code and models are available at
https://github.com/OpenGVLab/VideoMamba.
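The abstract attributes VideoMamba's efficiency to a linear-complexity operator. The sketch below is a generic illustration of why a state space recurrence scales linearly with sequence length: each step updates a fixed-size hidden state, so processing T tokens costs O(T) rather than the O(T^2) of self-attention. It is not the paper's selective-scan implementation; the function name `ssm_scan` and the parameters `A`, `B`, `C` follow standard SSM notation and are illustrative assumptions only.

```python
# Minimal sketch of a discrete state space recurrence (illustrative, not the
# authors' implementation). The hidden state has fixed size, so the loop over
# T tokens runs in linear time with respect to sequence length.
import numpy as np

def ssm_scan(x, A, B, C):
    """Run a discrete SSM recurrence over a 1-D token sequence.

    x: (T, d_in) input tokens, A: (d_state, d_state) state transition,
    B: (d_state, d_in) input projection, C: (d_out, d_state) readout.
    """
    T = x.shape[0]
    h = np.zeros(A.shape[0])          # fixed-size hidden state
    y = np.empty((T, C.shape[0]))
    for t in range(T):                # single pass over the sequence: O(T)
        h = A @ h + B @ x[t]          # state update
        y[t] = C @ h                  # per-step output
    return y

# Example: 1,000 "video tokens" with 64-dim features and a 16-dim state.
rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 64))
A = 0.9 * np.eye(16)                  # stable toy dynamics
B = rng.standard_normal((16, 64)) * 0.01
C = rng.standard_normal((8, 16)) * 0.1
out = ssm_scan(x, A, B, C)
print(out.shape)                      # (1000, 8)
```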