SkyLadder: Better and Faster Pretraining via Context Window Scheduling
March 19, 2025
Authors: Tongyao Zhu, Qian Liu, Haonan Wang, Shiqi Chen, Xiangming Gu, Tianyu Pang, Min-Yen Kan
cs.AI
Abstract
Recent advancements in LLM pretraining have featured ever-expanding context
windows to process longer sequences. However, our pilot study reveals that
models pretrained with shorter context windows consistently outperform their
long-context counterparts under a fixed token budget. This finding motivates us
to explore an optimal context window scheduling strategy to better balance
long-context capability with pretraining efficiency. To this end, we propose
SkyLadder, a simple yet effective approach that implements a short-to-long
context window transition. SkyLadder preserves strong standard benchmark
performance, while matching or exceeding baseline results on long context
tasks. Through extensive experiments, we pre-train 1B-parameter models (up to
32K context) and 3B-parameter models (8K context) on 100B tokens, demonstrating
that SkyLadder yields consistent gains of up to 3.7% on common benchmarks,
while achieving up to 22% faster training speeds compared to baselines. The
code is at https://github.com/sail-sg/SkyLadder.
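To make the idea of a short-to-long context window transition concrete, below is a minimal illustrative sketch (not the authors' implementation; see the linked repository for that). It assumes a simple linear ramp of the context window over a fraction of pretraining steps; the function name, the starting/final lengths, the ramp fraction, and the rounding to a block multiple are all hypothetical choices for illustration.

    # Sketch of a short-to-long context window schedule (illustrative only).
    # All parameter values below are assumptions, not the paper's settings.
    def context_window_at_step(step, total_steps, start_len=256, final_len=32768,
                               ramp_fraction=0.8, multiple_of=256):
        """Return the context window (in tokens) to use at a given training step."""
        ramp_steps = max(1, int(total_steps * ramp_fraction))
        progress = min(step / ramp_steps, 1.0)          # fraction of the ramp completed
        length = start_len + progress * (final_len - start_len)
        length = int(length) // multiple_of * multiple_of  # keep packed sequences aligned
        return max(start_len, length)

    if __name__ == "__main__":
        total = 100_000
        for s in (0, 25_000, 50_000, 80_000, 100_000):
            print(s, context_window_at_step(s, total))

Under this sketch, training starts with short sequences (cheaper attention, more updates per token budget) and only reaches the full 32K window late in pretraining, which is the trade-off the abstract describes between pretraining efficiency and long-context capability.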