Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks
February 6, 2024
Authors: Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
cs.AI
Abstract
State-space models (SSMs), such as Mamba (Gu & Dao, 2023), have been proposed
as alternatives to Transformer networks in language modeling, by incorporating
gating, convolutions, and input-dependent token selection to mitigate the
quadratic cost of multi-head attention. Although SSMs exhibit competitive
performance, their in-context learning (ICL) capabilities, a remarkable
emergent property of modern language models that enables task execution without
parameter optimization, remain underexplored compared to Transformers. In this
study, we evaluate the ICL performance of SSMs, focusing on Mamba, against
Transformer models across various tasks. Our results show that SSMs perform
comparably to Transformers in standard regression ICL tasks, while
outperforming them in tasks like sparse parity learning. However, SSMs fall
short in tasks involving non-standard retrieval functionality. To address these
limitations, we introduce a hybrid model, MambaFormer, that combines Mamba with
attention blocks, surpassing individual models in tasks where they struggle
independently. Our findings suggest that hybrid architectures offer promising
avenues for enhancing ICL in language models.
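For concreteness, below is a minimal sketch of how the standard regression ICL prompts mentioned in the abstract are typically constructed (in the style commonly used in this line of work, e.g., Garg et al., 2022): a fresh linear task is sampled for each prompt, and the (x, y) examples are interleaved into one token sequence ending with a query input. The dimensions, zero-padding scheme, and function name are illustrative assumptions, not the paper's exact data pipeline.

```python
import torch


def make_icl_regression_prompt(n_examples: int = 20, d: int = 8, noise: float = 0.0):
    """Build one in-context linear-regression prompt: the model sees
    (x_1, y_1, ..., x_k, y_k, x_query) and must predict y_query from context alone."""
    w = torch.randn(d)                                  # task-specific weight vector
    xs = torch.randn(n_examples + 1, d)                 # last x is the query
    ys = xs @ w + noise * torch.randn(n_examples + 1)

    # Interleave x- and y-tokens into one sequence of width d + 1:
    # x-tokens carry the input with a zero in the label slot,
    # y-tokens carry the label with zeros in the input slots.
    x_tokens = torch.cat([xs, torch.zeros(n_examples + 1, 1)], dim=1)
    y_tokens = torch.cat([torch.zeros(n_examples + 1, d), ys.unsqueeze(1)], dim=1)
    seq = torch.stack([x_tokens, y_tokens], dim=1).reshape(-1, d + 1)

    # Drop the final y-token: that label is the prediction target.
    return seq[:-1], ys[-1]


prompt, target = make_icl_regression_prompt()
print(prompt.shape)  # torch.Size([41, 9]): 20 (x, y) pairs plus the query x
```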
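Similarly, a minimal sketch of a hybrid block that interleaves a Mamba (SSM) layer with a causal self-attention layer, in the spirit of the MambaFormer variant described above. It assumes the mamba_ssm package (which requires a CUDA GPU); the layer ordering, pre-norm residual structure, and hyperparameters are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # pip install mamba-ssm (CUDA required)


class HybridBlock(nn.Module):
    """One Mamba (SSM) layer followed by one causal self-attention layer,
    each pre-normed with a residual connection. Illustrative only."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.norm2 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        x = x + self.mamba(self.norm1(x))
        h = self.norm2(x)
        # Causal mask: each position may only attend to earlier tokens,
        # matching the autoregressive ICL setting.
        L = x.size(1)
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool, device=x.device), diagonal=1)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask, need_weights=False)
        return x + attn_out


if __name__ == "__main__":
    block = HybridBlock(d_model=64).cuda()
    tokens = torch.randn(2, 32, 64, device="cuda")
    print(block(tokens).shape)  # torch.Size([2, 32, 64])
```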