
SiMBA: Simplified Mamba-Based Architecture for Vision and Multivariate Time series

March 22, 2024
Authors: Badri N. Patro, Vijay S. Agneeswaran
cs.AI

Abstract

Transformers have widely adopted attention networks for sequence mixing and MLPs for channel mixing, playing a pivotal role in achieving breakthroughs across domains. However, recent literature highlights issues with attention networks, including low inductive bias and quadratic complexity with respect to input sequence length. State Space Models (SSMs) such as S4 and others (Hippo, Global Convolutions, liquid S4, LRU, Mega, and Mamba) have emerged to address these issues and help handle longer sequence lengths. Although Mamba is the state-of-the-art SSM, it has stability issues when scaled to large networks on computer vision datasets. We propose SiMBA, a new architecture that introduces Einstein FFT (EinFFT) for channel modeling via specific eigenvalue computations and uses the Mamba block for sequence modeling. Extensive performance studies across image and time-series benchmarks demonstrate that SiMBA outperforms existing SSMs, bridging the performance gap with state-of-the-art transformers. Notably, SiMBA establishes itself as the new state-of-the-art SSM on ImageNet, on transfer learning benchmarks such as Stanford Car and Flower, on task learning benchmarks, and on seven time-series benchmark datasets. The project page is available at https://github.com/badripatro/Simba.
