GPAS: Accelerating Convergence of LLM Pretraining via Gradient-Preserving Activation Scaling
June 27, 2025
Authors: Tianhao Chen, Xin Xu, Zijing Liu, Pengxiang Li, Xinyuan Song, Ajay Kumar Jaiswal, Fan Zhang, Jishan Hu, Yang Wang, Hao Chen, Shizhe Diao, Shiwei Liu, Yu Li, Yin Lu, Can Yang
cs.AI
Abstract
Modern Large Language Models, such as the LLaMA, Qwen and DeepSeek series,
predominantly adopt the Pre-LayerNorm (Pre-LN) Transformer architecture. While
stable during pretraining and scalable to large model sizes, Pre-LN
suffers from an exponential growth in activation variance across layers,
causing the residual path to dominate over sub-layer outputs and limiting the
learning capacity of deeper layers. To mitigate this issue, we propose
Gradient-Preserving Activation Scaling (GPAS), a simple technique that can be
used in combination with existing approaches. GPAS works by scaling down the
intermediate activations while keeping their gradients unchanged. This leaves
the information in the activations intact and avoids the vanishing-gradient
problem that would arise if the gradients were scaled down as well. Extensive
experiments on models ranging from 71M to 1B parameters show that GPAS achieves consistent
performance gains. Beyond enhancing Pre-LN Transformers, GPAS also shows
promise in improving alternative architectures such as Sandwich-LN and
DeepNorm, demonstrating its versatility and potential for improving training
dynamics in a wide range of settings.
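
The mechanism described in the abstract (scale activations down in the forward pass while leaving their gradients untouched in the backward pass) can be expressed with a custom autograd rule. Below is a minimal PyTorch sketch of one way to implement such a gradient-preserving scale; the learnable per-layer gate `lam`, its sigmoid parameterization, and the class names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn


class _GradPreservingScale(torch.autograd.Function):
    """Forward: scale activations by alpha. Backward: pass the incoming
    gradient through to x unchanged, so downscaling the activations does
    not shrink the gradients flowing to earlier layers."""

    @staticmethod
    def forward(ctx, x, alpha):
        ctx.save_for_backward(x)
        return alpha * x

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Gradient w.r.t. x is NOT multiplied by alpha (gradient-preserving);
        # alpha still receives its ordinary gradient so the gate can be learned.
        grad_alpha = (grad_out * x).sum()
        return grad_out, grad_alpha


class GPASGate(nn.Module):
    """Hypothetical per-layer gate applied to the residual-stream activations."""

    def __init__(self):
        super().__init__()
        self.lam = nn.Parameter(torch.zeros(()))  # gate logit; init is an assumption

    def forward(self, x):
        alpha = torch.sigmoid(self.lam)  # scale factor in (0, 1)
        return _GradPreservingScale.apply(x, alpha)


# Quick check: the forward output is scaled, the gradient w.r.t. x is not.
x = torch.randn(2, 4, requires_grad=True)
y = GPASGate()(x)
y.sum().backward()
assert torch.allclose(x.grad, torch.ones_like(x))  # identity backward w.r.t. x
```

Note the contrast with a plain `alpha * x`, whose backward pass would multiply the incoming gradient by `alpha` and thus shrink it layer after layer; the custom rule decouples activation scaling from gradient scaling.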