

EfficientVMamba: Atrous Selective Scan for Light Weight Visual Mamba

March 15, 2024
Authors: Xiaohuan Pei, Tao Huang, Chang Xu
cs.AI

Abstract

Prior efforts in light-weight model development have mainly centered on CNN- and Transformer-based designs, yet both face persistent challenges. CNNs, adept at local feature extraction, sacrifice resolution, while Transformers offer global reach but escalate computational demands to O(N^2). This ongoing trade-off between accuracy and efficiency remains a significant hurdle. Recently, state space models (SSMs) such as Mamba have shown outstanding performance and competitiveness in various tasks such as language modeling and computer vision, while reducing the time complexity of global information extraction to O(N). Inspired by this, this work explores the potential of visual state space models in light-weight model design and introduces a novel efficient model variant dubbed EfficientVMamba. Concretely, EfficientVMamba integrates an atrous-based selective scan approach via efficient skip sampling, constituting building blocks designed to harness both global and local representational features. Additionally, we investigate the integration of SSM blocks and convolutions, and introduce an efficient visual state space block combined with an additional convolution branch, which further elevates model performance. Experimental results show that EfficientVMamba scales down computational complexity while yielding competitive results across a variety of vision tasks. For example, our EfficientVMamba-S with 1.3G FLOPs outperforms Vim-Ti with 1.5G FLOPs by a large margin of 5.6% accuracy on ImageNet. Code is available at: https://github.com/TerryPei/EfficientVMamba.
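
The atrous selective scan described above is the key efficiency device: rather than scanning every spatial token, each scan branch visits tokens spaced a fixed stride apart, shrinking the scanned sequence (and hence scan cost) by the square of the stride. Below is a minimal PyTorch sketch of this skip-sampling idea; the function names (`atrous_skip_sample`, `atrous_merge`) and the stride handling are illustrative assumptions, not the repository's actual API.

```python
import torch

def atrous_skip_sample(x: torch.Tensor, rate: int = 2):
    """Partition a feature map into rate*rate strided sub-maps.

    Hypothetical sketch of the skip-sampling step: each selective-scan
    branch only visits tokens spaced `rate` apart, cutting the scanned
    sequence length by a factor of rate^2.

    x: (B, C, H, W) feature map; H and W assumed divisible by rate.
    Returns a list of rate*rate sub-maps of shape (B, C, H/rate, W/rate).
    """
    return [
        x[:, :, i::rate, j::rate]
        for i in range(rate)
        for j in range(rate)
    ]

def atrous_merge(subs, rate: int = 2):
    """Inverse of atrous_skip_sample: scatter sub-maps back in place."""
    b, c, h, w = subs[0].shape
    out = subs[0].new_empty(b, c, h * rate, w * rate)
    k = 0
    for i in range(rate):
        for j in range(rate):
            out[:, :, i::rate, j::rate] = subs[k]
            k += 1
    return out
```

Each strided sub-map would then be flattened and fed through a selective scan independently, and the outputs merged back into the full-resolution grid; this is what lets the scan cover the whole image while touching only a fraction of its tokens per branch.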
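The abstract also describes an efficient visual state space block that pairs the SSM path with an additional convolution branch, fusing global (scan) and local (conv) features. The sketch below illustrates one plausible layout under that description; `EfficientVSSBlock` is a hypothetical name, and the selective scan itself is stubbed with a linear placeholder since a real Mamba-style scan is outside the scope of this sketch.

```python
import torch
import torch.nn as nn

class EfficientVSSBlock(nn.Module):
    """Hypothetical sketch of a visual state-space block with an
    extra convolution branch, per the abstract's description. The
    selective-scan (SSM) branch is stubbed with a linear placeholder;
    the real model would use a Mamba-style scan there.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        # Global branch: stand-in for the atrous selective scan.
        self.ssm = nn.Linear(dim, dim)  # placeholder for the SSM
        # Local branch: depthwise conv captures neighborhood detail.
        self.conv = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
            nn.BatchNorm2d(dim),
            nn.SiLU(),
        )
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)           # (B, HW, C)
        global_feat = self.ssm(self.norm(tokens))       # global mixing
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        local_feat = self.conv(x)                       # local mixing
        return x + self.proj(global_feat + local_feat)  # residual fuse
```

The parallel-branch design mirrors the accuracy/efficiency framing of the abstract: the SSM branch supplies O(N) global context while the cheap depthwise convolution restores the local inductive bias that pure scan models lack.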

