The Surprising Agreement Between Convex Optimization Theory and Learning-Rate Scheduling for Large Model Training

January 31, 2025
Authors: Fabian Schaipp, Alexander Hägele, Adrien Taylor, Umut Simsekli, Francis Bach
cs.AI

Abstract

We show that learning-rate schedules for large model training behave surprisingly similarly to a performance bound from non-smooth convex optimization theory. We provide a bound for the constant schedule with linear cooldown; in particular, the practical benefit of cooldown is reflected in the bound due to the absence of logarithmic terms. Further, we show that this surprisingly close match between optimization theory and practice can be exploited for learning-rate tuning: we achieve noticeable improvements for training 124M and 210M Llama-type models by (i) extending the schedule for continued training with optimal learning-rate, and (ii) transferring the optimal learning-rate across schedules.
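
For orientation, here is a minimal sketch of the objects the abstract refers to, stated in standard textbook notation rather than as the paper's exact theorem. Assume f is convex with subgradient norms bounded by G and initial distance \|x_1 - x_\star\| \le D, and consider subgradient descent x_{t+1} = x_t - \eta_t g_t with learning rates \eta_t. A classical schedule-dependent bound is

\[
\min_{1 \le t \le T} f(x_t) - f_\star \;\le\; \frac{D^2 + G^2 \sum_{t=1}^{T} \eta_t^2}{2 \sum_{t=1}^{T} \eta_t},
\]

and one common parameterization of the constant schedule with linear cooldown, with peak rate \gamma, horizon T, and cooldown fraction c, is

\[
\eta_t \;=\;
\begin{cases}
\gamma, & t \le (1-c)\,T,\\[4pt]
\gamma\,\dfrac{T - t}{c\,T}, & (1-c)\,T < t \le T.
\end{cases}
\]

The paper's bound for this schedule is a refinement of this type of result; per the abstract, it contains no logarithmic terms, which is where the practical benefit of the cooldown shows up.

As a further illustration, a hedged sketch (hypothetical helper, not code from the paper) of such a schedule and of extending it for continued training, item (i) in the abstract, might look like:

```python
def constant_with_linear_cooldown(step: int, total_steps: int,
                                  peak_lr: float, cooldown_frac: float = 0.2) -> float:
    """Constant learning rate, then a linear cooldown to zero.

    The cooldown fraction (0.2 here) and endpoint conventions are
    illustrative assumptions, not the paper's exact parameterization.
    """
    cooldown_steps = max(1, int(cooldown_frac * total_steps))
    cooldown_start = total_steps - cooldown_steps
    if step <= cooldown_start:
        return peak_lr
    # Interpolate linearly from peak_lr at cooldown_start down to 0 at total_steps.
    return peak_lr * (total_steps - step) / cooldown_steps


# Extending the schedule for continued training: keep the constant phase
# running to a longer horizon and only then cool down.
lrs_short = [constant_with_linear_cooldown(t, 1000, 3e-3) for t in range(1, 1001)]
lrs_extended = [constant_with_linear_cooldown(t, 2000, 3e-3) for t in range(1, 2001)]
```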
