

Variance Control via Weight Rescaling in LLM Pre-training

March 21, 2025
Authors: Louis Owen, Abhay Kumar, Nilabhra Roy Chowdhury, Fabian Güra
cs.AI

Abstract

The outcome of Large Language Model (LLM) pre-training strongly depends on weight initialization and variance control strategies. Although the importance of initial variance control has been well documented in neural networks in general, the literature on initialization and management of its growth during LLM pre-training, specifically, is somewhat sparse. In this paper, we introduce the Layer Index Rescaling (LIR) weight initialization scheme, and the Target Variance Rescaling (TVR) variance control strategy. Experiments on a 1B parameter LLaMA model demonstrate that better variance management using these techniques yields substantial improvements in downstream task performance (up to 4.6% on common pre-training benchmarks) and reduces extreme activation values, thus mitigating challenges associated with quantization and low-precision training. Our code is available at: https://github.com/bluorion-com/weight_rescaling.
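The abstract names the two techniques but does not spell out how they work. As a rough, hypothetical illustration of what a layer-index rescaled initialization and a target-variance rescaling step could look like in PyTorch, consider the sketch below. The function names lir_init_ and tvr_rescale_, the 1/sqrt(layer index) shrinkage rule, the 0.02 base standard deviation, and the rescaling interval are all assumptions made for illustration; they are not the paper's actual formulation, which is available in the linked repository.

import math
import torch
import torch.nn as nn

def lir_init_(linear: nn.Linear, layer_index: int, base_std: float = 0.02) -> None:
    # Hypothetical layer-index rescaled init: shrink the init std of deeper
    # layers so activation variance does not grow with depth (sketch only).
    std = base_std / math.sqrt(max(layer_index, 1))
    nn.init.normal_(linear.weight, mean=0.0, std=std)
    if linear.bias is not None:
        nn.init.zeros_(linear.bias)

@torch.no_grad()
def tvr_rescale_(module: nn.Module, target_std: float = 0.02) -> None:
    # Hypothetical target-variance rescaling: multiply each weight matrix so
    # its empirical std returns to a target value, leaving its direction
    # unchanged (sketch only).
    for p in module.parameters():
        if p.dim() >= 2:  # rescale weight matrices only, not biases or norm gains
            current_std = p.std()
            if current_std > 0:
                p.mul_(target_std / current_std)

# Usage sketch: initialize a toy stack of layers, then rescale periodically.
layers = nn.ModuleList([nn.Linear(512, 512) for _ in range(8)])
for i, layer in enumerate(layers, start=1):
    lir_init_(layer, layer_index=i)

RESCALE_EVERY = 1000  # hypothetical interval, in optimizer steps
# Inside a training loop one might call:
# if step % RESCALE_EVERY == 0:
#     tvr_rescale_(layers, target_std=0.02)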
