State-offset Tuning: State-based Parameter-Efficient Fine-Tuning for State Space Models
March 5, 2025
Authors: Wonjun Kang, Kevin Galim, Yuchen Zeng, Minjae Lee, Hyung Il Koo, Nam Ik Cho
cs.AI
Abstract
State Space Models (SSMs) have emerged as efficient alternatives to
Transformers, mitigating their quadratic computational cost. However, the
application of Parameter-Efficient Fine-Tuning (PEFT) methods to SSMs remains
largely unexplored. In particular, prompt-based methods like Prompt Tuning and
Prefix-Tuning, which are widely used in Transformers, do not perform well on
SSMs. To address this, we propose state-based methods as a superior alternative
to prompt-based methods. This new family of methods naturally stems from the
architectural characteristics of SSMs. State-based methods adjust state-related
features directly instead of depending on external prompts. Furthermore, we
introduce a novel state-based PEFT method: State-offset Tuning. At every
timestep, our method directly affects the state at the current step, leading to
more effective adaptation. Through extensive experiments across diverse
datasets, we demonstrate the effectiveness of our method. Code is available at
https://github.com/furiosa-ai/ssm-state-tuning.
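To make the core idea concrete, the following is a minimal sketch of adding a learnable per-step state offset to a diagonal linear SSM recurrence (h_t = Ā h_{t-1} + B x_t, y_t = C h_t). It is an illustration under simplifying assumptions, not the authors' implementation: the class name StateOffsetSSM, the parameter name state_offset, and the exp-based discretization stand-in are all invented here for exposition; see the repository above for the actual method.

    import torch
    import torch.nn as nn

    class StateOffsetSSM(nn.Module):
        """Illustrative diagonal SSM layer with a learnable state offset.

        Sketch only: shows the state-based idea of shifting the state
        directly at every timestep (h_t + offset) instead of prepending
        prompt tokens. Names are hypothetical, not from the paper's code.
        """
        def __init__(self, d_model: int, d_state: int):
            super().__init__()
            self.A = nn.Parameter(-torch.rand(d_model, d_state))        # frozen during PEFT
            self.B = nn.Parameter(torch.randn(d_model, d_state) * 0.1)  # frozen during PEFT
            self.C = nn.Parameter(torch.randn(d_model, d_state) * 0.1)  # frozen during PEFT
            # The only parameter trained during fine-tuning in this sketch:
            self.state_offset = nn.Parameter(torch.zeros(d_model, d_state))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, seq_len, d_model)
            batch, seq_len, d_model = x.shape
            h = x.new_zeros(batch, d_model, self.A.shape[1])
            A_bar = torch.exp(self.A)  # crude discretization stand-in, |A_bar| < 1
            ys = []
            for t in range(seq_len):
                # Standard recurrence: h_t = A_bar * h_{t-1} + B * x_t
                h = A_bar * h + self.B * x[:, t].unsqueeze(-1)
                # State-offset: shift the state at the current step before readout
                y = ((h + self.state_offset) * self.C).sum(dim=-1)
                ys.append(y)
            return torch.stack(ys, dim=1)  # (batch, seq_len, d_model)

In a parameter-efficient setup, everything except the offset would be frozen, e.g.:

    model = StateOffsetSSM(d_model=16, d_state=8)
    for name, p in model.named_parameters():
        p.requires_grad = (name == "state_offset")  # train only the state offset
    out = model(torch.randn(2, 32, 16))  # -> (2, 32, 16)

Because the offset enters the recurrence directly rather than through extra input tokens, the sequence length and input pipeline are unchanged, which is the contrast the abstract draws with prompt-based methods.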