Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model

January 17, 2024
Authors: Lianghui Zhu, Bencheng Liao, Qian Zhang, Xinlong Wang, Wenyu Liu, Xinggang Wang
cs.AI

Abstract

Recently, the state space models (SSMs) with efficient hardware-aware designs, i.e., Mamba, have shown great potential for long sequence modeling. Building efficient and generic vision backbones purely upon SSMs is an appealing direction. However, representing visual data is challenging for SSMs due to the position-sensitivity of visual data and the requirement of global context for visual understanding. In this paper, we show that the reliance of visual representation learning on self-attention is not necessary and propose a new generic vision backbone with bidirectional Mamba blocks (Vim), which marks the image sequences with position embeddings and compresses the visual representation with bidirectional state space models. On ImageNet classification, COCO object detection, and ADE20k semantic segmentation tasks, Vim achieves higher performance compared to well-established vision transformers like DeiT, while also demonstrating significantly improved computation & memory efficiency. For example, Vim is 2.8× faster than DeiT and saves 86.8% GPU memory when performing batch inference to extract features on images with a resolution of 1248×1248. The results demonstrate that Vim is capable of overcoming the computation & memory constraints on performing Transformer-style understanding for high-resolution images, and it has great potential to become the next-generation backbone for vision foundation models. Code is available at https://github.com/hustvl/Vim.
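
To make the core idea concrete, the sketch below illustrates how position-embedded patch tokens can be processed by a forward and a backward state space scan whose outputs are combined, as the abstract describes. This is a minimal, hypothetical simplification for illustration only: the `SimpleSSM` and `BidirectionalVimBlock` classes are not the authors' implementation, and the real Vim block uses Mamba's hardware-aware selective scan rather than a plain diagonal linear SSM.

```python
# Minimal sketch of the bidirectional-SSM idea, NOT the authors' code.
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Toy linear state space scan: h_t = A*h_{t-1} + B*x_t, y_t = C*h_t."""
    def __init__(self, dim, state_dim=16):
        super().__init__()
        self.A = nn.Parameter(torch.randn(state_dim) * -0.1)  # diagonal transition (log-space)
        self.B = nn.Linear(dim, state_dim, bias=False)
        self.C = nn.Linear(state_dim, dim, bias=False)

    def forward(self, x):                      # x: (batch, seq, dim)
        h = x.new_zeros(x.size(0), self.A.numel())
        Bx = self.B(x)
        outs = []
        for t in range(x.size(1)):             # sequential scan over tokens
            h = torch.exp(self.A) * h + Bx[:, t]
            outs.append(self.C(h))
        return torch.stack(outs, dim=1)

class BidirectionalVimBlock(nn.Module):
    """Runs one SSM over the patch sequence forward and one backward,
    then sums the two directions with a residual connection."""
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.fwd_ssm = SimpleSSM(dim)
        self.bwd_ssm = SimpleSSM(dim)

    def forward(self, tokens):                 # tokens: (batch, seq, dim)
        x = self.norm(tokens)
        fwd = self.fwd_ssm(x)
        bwd = self.bwd_ssm(x.flip(1)).flip(1)  # scan reversed sequence, flip back
        return tokens + fwd + bwd

# Usage: patch tokens plus learned position embeddings, as in the abstract.
batch, num_patches, dim = 2, 196, 192
patches = torch.randn(batch, num_patches, dim)
pos_embed = nn.Parameter(torch.zeros(1, num_patches, dim))
out = BidirectionalVimBlock(dim)(patches + pos_embed)
print(out.shape)  # torch.Size([2, 196, 192])
```

In the sketch, scanning the flipped sequence and flipping the result back is what gives each token access to context from both directions, which is the property the abstract credits for recovering the global context that a single causal SSM scan lacks.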