MUDDFormer: Breaking Residual Bottlenecks in Transformers via Multiway Dynamic Dense Connections
February 13, 2025
Authors: Da Xiao, Qingye Meng, Shengping Li, Xingyuan Yuan
cs.AI
Abstract
We propose MUltiway Dynamic Dense (MUDD) connections, a simple yet effective
method to address the limitations of residual connections and enhance
cross-layer information flow in Transformers. Unlike existing dense-connection
approaches with static, shared connection weights, MUDD generates connection
weights dynamically, depending on the hidden state at each sequence position
and on each decoupled input stream (query, key, value, or residual) of a
Transformer block. MUDD connections can be seamlessly integrated into any
Transformer architecture to create MUDDFormer. Extensive experiments show that
MUDDFormer significantly outperforms Transformers in language modeling across
various model architectures and scales, matching the performance of
Transformers trained with 1.8x-2.4x the compute. Notably, MUDDPythia-2.8B
matches Pythia-6.9B in pretraining perplexity and on downstream tasks, and
even rivals Pythia-12B in five-shot settings, while adding only 0.23% more
parameters and 0.4% more computation. Code in JAX and PyTorch, along with
pre-trained models, is available at https://github.com/Caiyun-AI/MUDDFormer.
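
To make the mechanism concrete, here is a minimal JAX sketch of dynamic dense
aggregation as the abstract describes it: per-position connection weights are
generated from the current hidden state, one set per source layer and per input
stream. The function and parameter names (mudd_aggregate, w_dyn) and the
single-projection weight parameterization are illustrative assumptions, not
the authors' API; see the repository linked above for the reference code.

```python
# A minimal sketch of Multiway Dynamic Dense (MUDD) aggregation, assuming a
# decoder-only Transformer. Illustrative only; the released implementation
# (https://github.com/Caiyun-AI/MUDDFormer) may parameterize the weights
# differently.
import jax
import jax.numpy as jnp

def mudd_aggregate(hiddens, w_dyn, streams=("q", "k", "v", "r")):
    """Mix the outputs of all previous layers into one input per stream.

    hiddens: list of n arrays of shape (batch, seq, dim), the outputs of
             layers 0..i-1 (index 0 being the embedding).
    w_dyn:   (dim, len(streams) * n) projection (assumed here) that maps the
             current hidden state to per-position connection weights.
    Returns a dict mapping each stream name to a (batch, seq, dim) input.
    """
    x = hiddens[-1]                                  # current hidden state
    n = len(hiddens)
    # Dynamic weights: one scalar per position, per stream, per source layer.
    a = jnp.einsum("bsd,de->bse", x, w_dyn)          # (batch, seq, S*n)
    a = a.reshape(*a.shape[:-1], len(streams), n)    # (batch, seq, S, n)
    stacked = jnp.stack(hiddens, axis=-2)            # (batch, seq, n, dim)
    # Weighted sum over source layers, separately for each stream.
    mixed = jnp.einsum("bskn,bsnd->bskd", a, stacked)
    return {s: mixed[..., k, :] for k, s in enumerate(streams)}

# Hypothetical usage at the input of block i (here with 3 previous layers):
key = jax.random.PRNGKey(0)
B, T, D = 2, 8, 16
hiddens = [jax.random.normal(jax.random.fold_in(key, i), (B, T, D))
           for i in range(4)]
w_dyn = 0.02 * jax.random.normal(jax.random.fold_in(key, 99),
                                 (D, 4 * len(hiddens)))
inputs = mudd_aggregate(hiddens, w_dyn)  # keys "q", "k", "v", "r"
```

The contrast with prior dense connections is visible in w_dyn: a static scheme
would learn one scalar per layer pair, shared by every token, whereas here the
weights are computed from the hidden state, so each position mixes the layer
stack differently, and each of the four decoupled streams gets its own mixture
(the "multiway" in MUDD).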