Fantastic Pretraining Optimizers and Where to Find Them
September 2, 2025
作者: Kaiyue Wen, David Hall, Tengyu Ma, Percy Liang
cs.AI
Abstract
AdamW has long been the dominant optimizer in language model pretraining,
despite numerous claims that alternative optimizers offer 1.4 to 2x speedup. We
posit that two methodological shortcomings have obscured fair comparisons and
hindered practical adoption: (i) unequal hyperparameter tuning and (ii) limited
or misleading evaluation setups. To address these two issues, we conduct a
systematic study of ten deep learning optimizers across four model scales
(0.1B-1.2B parameters) and data-to-model ratios (1-8x the Chinchilla optimum).
We find that fair and informative comparisons require rigorous hyperparameter
tuning and evaluations across a range of model scales and data-to-model ratios,
performed at the end of training. First, optimal hyperparameters for one
optimizer may be suboptimal for another, making blind hyperparameter transfer
unfair. Second, the actual speedup of many proposed optimizers over well-tuned
baselines is lower than claimed and decreases with model size to only 1.1x for
1.2B parameter models. Third, comparing intermediate checkpoints before
reaching the target training budget can be misleading, as rankings between two
optimizers can flip during training due to learning rate decay. Through our
thorough investigation, we find that all of the fastest optimizers, such as Muon
and Soap, use matrices as preconditioners, multiplying gradients by
matrices rather than entry-wise scalars. However, the speedup of matrix-based
optimizers is inversely proportional to model scale, decreasing from 1.4x over
AdamW for 0.1B parameter models to merely 1.1x for 1.2B parameter models.
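To make the distinction between entry-wise and matrix preconditioning concrete, here is a minimal NumPy sketch, not the paper's code: an AdamW-style update that scales each coordinate by its own scalar, next to a Muon-style update that transforms the whole 2-D gradient matrix at once via a Newton-Schulz orthogonalization. The function names, hyperparameter values, and the specific Newton-Schulz variant are illustrative assumptions.

```python
# Illustrative sketch only: contrasts entry-wise vs. matrix preconditioning.
# Hyperparameters and the Newton-Schulz variant are assumptions for demonstration,
# not the exact settings studied in the paper.
import numpy as np

def adamw_style_update(w, g, m, v, lr=1e-3, b1=0.9, b2=0.95, eps=1e-8, wd=0.1):
    """Entry-wise preconditioning: each coordinate is rescaled by its own scalar.
    Bias correction is omitted to keep the sketch short."""
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    w = w - lr * (m / (np.sqrt(v) + eps) + wd * w)   # decoupled weight decay
    return w, m, v

def newton_schulz_orthogonalize(g, steps=5):
    """Approximately map g to U V^T from its SVD (a Muon-style matrix preconditioner)."""
    a, b, c = 3.4445, -4.7750, 2.0315        # quintic-iteration coefficients used by Muon
    transposed = g.shape[0] > g.shape[1]     # iterate on the wide orientation
    x = g.T if transposed else g
    x = x / (np.linalg.norm(x) + 1e-8)       # Frobenius normalization keeps the iteration stable
    for _ in range(steps):
        s = x @ x.T
        x = a * x + (b * s + c * (s @ s)) @ x
    return x.T if transposed else x

def muon_style_update(w, g, mom, lr=0.02, beta=0.95):
    """Matrix preconditioning: the momentum-accumulated gradient matrix is
    transformed as a whole rather than scaled coordinate by coordinate."""
    mom = beta * mom + g
    w = w - lr * newton_schulz_orthogonalize(mom)
    return w, mom
```

The contrast is that AdamW's preconditioner is effectively a diagonal matrix built from per-coordinate second-moment estimates, whereas Muon- and Soap-style updates apply a dense transformation to each weight matrix, which is the common property the paper identifies in the fastest optimizers it studies.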