
Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length

April 12, 2024
Authors: Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou
cs.AI

Abstract

The quadratic complexity and weak length extrapolation of Transformers limit their ability to scale to long sequences, and while sub-quadratic solutions like linear attention and state space models exist, they empirically underperform Transformers in pretraining efficiency and downstream task accuracy. We introduce Megalodon, a neural architecture for efficient sequence modeling with unlimited context length. Megalodon inherits the architecture of Mega (exponential moving average with gated attention) and further introduces multiple technical components to improve its capability and stability, including the complex exponential moving average (CEMA), the timestep normalization layer, the normalized attention mechanism, and pre-norm with a two-hop residual configuration. In a controlled head-to-head comparison with Llama2, Megalodon achieves better efficiency than the Transformer at the scale of 7 billion parameters and 2 trillion training tokens. Megalodon reaches a training loss of 1.70, landing mid-way between Llama2-7B (1.75) and 13B (1.67). Code: https://github.com/XuezheMax/megalodon
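Of the components listed above, the complex exponential moving average (CEMA) extends Mega's damped EMA from the real to the complex domain, so the recurrent decay carries a phase as well as a magnitude. The PyTorch sketch below illustrates this general idea only; the function name complex_damped_ema, the parameter shapes, and the simplified per-dimension recurrence are illustrative assumptions and do not reproduce the released implementation in the repository linked above.

```python
# Minimal sketch of a complex damped EMA recurrence (CEMA-style idea).
# All names, shapes, and the simplified form are assumptions for illustration;
# this is NOT the Megalodon reference code.
import torch


def complex_damped_ema(x: torch.Tensor,
                       alpha: torch.Tensor,
                       delta: torch.Tensor,
                       theta: torch.Tensor) -> torch.Tensor:
    """x: (batch, seq_len, dim) real-valued input.
    alpha, delta: (dim,) real damping parameters in (0, 1).
    theta: (dim,) phase angles that make the decay complex-valued.
    Returns the real part of the complex EMA state at every timestep."""
    batch, seq_len, dim = x.shape
    phase = torch.exp(1j * theta)                        # complex rotation per dim
    p = alpha.to(torch.cfloat) * phase                   # complex input scale
    q = (1.0 - alpha * delta).to(torch.cfloat) * phase   # complex decay factor
    h = torch.zeros(batch, dim, dtype=torch.cfloat)      # hidden EMA state
    outputs = []
    for t in range(seq_len):
        h = p * x[:, t].to(torch.cfloat) + q * h         # damped, rotated update
        outputs.append(h.real)                           # project back to reals
    return torch.stack(outputs, dim=1)                   # (batch, seq_len, dim)


if __name__ == "__main__":
    x = torch.randn(2, 16, 8)
    alpha = torch.rand(8) * 0.5 + 0.25
    delta = torch.rand(8) * 0.5 + 0.25
    theta = torch.rand(8) * 3.14159
    y = complex_damped_ema(x, alpha, delta, theta)
    print(y.shape)  # torch.Size([2, 16, 8])
```

Written as a recurrence for clarity; in practice such an EMA can also be computed in parallel over the sequence as a convolution with the kernel implied by p and q.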
