Stack More Layers Differently: High-Rank Training Through Low-Rank Updates

July 11, 2023
Authors: Vladislav Lialin, Namrata Shivagunde, Sherin Muckatira, Anna Rumshisky
cs.AI

Abstract

Despite the dominance and effectiveness of scaling, resulting in large networks with hundreds of billions of parameters, the necessity to train overparametrized models remains poorly understood, and alternative approaches do not necessarily make it cheaper to train high-performance models. In this paper, we explore low-rank training techniques as an alternative approach to training large neural networks. We introduce a novel method called ReLoRA, which utilizes low-rank updates to train high-rank networks. We apply ReLoRA to pre-training transformer language models with up to 350M parameters and demonstrate comparable performance to regular neural network training. Furthermore, we observe that the efficiency of ReLoRA increases with model size, making it a promising approach for training multi-billion-parameter networks efficiently. Our findings shed light on the potential of low-rank training techniques and their implications for scaling laws.
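The abstract's central idea — that a sequence of low-rank updates, merged into the weights and then restarted, can produce a high-rank overall change — can be illustrated with a short sketch. The following PyTorch snippet is a minimal illustration assuming a LoRA-style parametrization W + BA; the class and method names (ReLoRALinearSketch, merge_and_reinit), the rank, and the initialization choices are assumptions made here for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ReLoRALinearSketch(nn.Module):
    """Hypothetical linear layer: a base weight W (frozen during the
    low-rank phase) plus a trainable low-rank update B @ A, LoRA-style."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.weight = nn.Parameter(
            torch.empty(out_features, in_features), requires_grad=False
        )
        nn.init.kaiming_uniform_(self.weight)
        # Low-rank factors: B @ A has shape (out_features, in_features)
        # but rank at most `rank`.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is W + B @ A; only A and B receive gradients.
        return x @ (self.weight + self.lora_B @ self.lora_A).T

    @torch.no_grad()
    def merge_and_reinit(self) -> None:
        # Fold the accumulated low-rank update into the base weight, then
        # restart the factors so the next cycle learns new directions.
        self.weight += self.lora_B @ self.lora_A
        self.lora_A.normal_(std=0.01)
        self.lora_B.zero_()
```

In such a training loop, merge_and_reinit() would be called periodically; because each cycle's B @ A is folded into W before the factors are re-initialized, the total change to W is a sum of several rank-r matrices and can therefore have rank well above r. The full method described in the paper pairs these restarts with additional scheduling details (e.g. optimizer-state and learning-rate resets) that this sketch omits.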