

GPAS: Accelerating Convergence of LLM Pretraining via Gradient-Preserving Activation Scaling

June 27, 2025
作者: Tianhao Chen, Xin Xu, Zijing Liu, Pengxiang Li, Xinyuan Song, Ajay Kumar Jaiswal, Fan Zhang, Jishan Hu, Yang Wang, Hao Chen, Shizhe Diao, Shiwei Liu, Yu Li, Yin Lu, Can Yang
cs.AI

Abstract

Modern Large Language Models, such as the LLaMA, Qwen and DeepSeek series, predominantly adopt the Pre-LayerNorm (Pre-LN) Transformer architecture. While being stable during pretraining and scalable to large model sizes, Pre-LN suffers from an exponential growth in activation variance across layers, causing the residual path to dominate over sub-layer outputs and limiting the learning capacity of deeper layers. To mitigate this issue, we propose Gradient-Preserving Activation Scaling (GPAS), a simple technique that can be used in combination with existing approaches. GPAS works by scaling down the intermediate activations while keeping their gradients unchanged. This leaves information in the activations intact, and avoids the gradient vanishing problem associated with gradient downscaling. Extensive experiments across various model sizes from 71M to 1B show that GPAS achieves consistent performance gains. Beyond enhancing Pre-LN Transformers, GPAS also shows promise in improving alternative architectures such as Sandwich-LN and DeepNorm, demonstrating its versatility and potential for improving training dynamics in a wide range of settings.
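The abstract describes the mechanism only at a high level: scale activations down in the forward pass while leaving their gradients untouched. Below is a minimal PyTorch sketch of that gradient-preserving scaling idea, not the paper's exact formulation; the learnable gate parameter (here `gate`, squashed through a sigmoid) and the stop-gradient construction via `detach()` are illustrative assumptions.

```python
import torch
import torch.nn as nn


class GradientPreservingScale(nn.Module):
    """Sketch of gradient-preserving activation scaling.

    Forward pass: activations are multiplied by a gate in (0, 1), shrinking them.
    Backward pass: the scaled term is built from x.detach(), so d(out)/dx is the
    identity and gradients flowing to earlier layers are not shrunk.
    (Illustrative parameterization; the paper may define the scale differently.)
    """

    def __init__(self, init_gate: float = 0.0):
        super().__init__()
        # Learnable gate; sigmoid(0.0) = 0.5, i.e. halve activations initially.
        self.gate = nn.Parameter(torch.tensor(init_gate))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scale = torch.sigmoid(self.gate)          # scaling factor in (0, 1)
        # Value: scale * x. Gradient w.r.t. x: 1, because x.detach() blocks the
        # scaled branch; the gate still receives gradients through this branch.
        return x - (1.0 - scale) * x.detach()


if __name__ == "__main__":
    layer = GradientPreservingScale()
    x = torch.randn(2, 4, requires_grad=True)
    y = layer(x)
    y.sum().backward()
    print(torch.allclose(y, torch.sigmoid(layer.gate) * x))  # True: forward is scaled
    print(x.grad)                                            # all ones: gradient unchanged
```

In a Pre-LN Transformer, a module like this would sit after each block's residual addition to curb the growth of activation variance across layers without introducing the vanishing gradients that plain downscaling would cause.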