

SkyLadder: Better and Faster Pretraining via Context Window Scheduling

March 19, 2025
Authors: Tongyao Zhu, Qian Liu, Haonan Wang, Shiqi Chen, Xiangming Gu, Tianyu Pang, Min-Yen Kan
cs.AI

Abstract

Recent advancements in LLM pretraining have featured ever-expanding context windows to process longer sequences. However, our pilot study reveals that models pretrained with shorter context windows consistently outperform their long-context counterparts under a fixed token budget. This finding motivates us to explore an optimal context window scheduling strategy to better balance long-context capability with pretraining efficiency. To this end, we propose SkyLadder, a simple yet effective approach that implements a short-to-long context window transition. SkyLadder preserves strong standard benchmark performance, while matching or exceeding baseline results on long context tasks. Through extensive experiments, we pre-train 1B-parameter models (up to 32K context) and 3B-parameter models (8K context) on 100B tokens, demonstrating that SkyLadder yields consistent gains of up to 3.7% on common benchmarks, while achieving up to 22% faster training speeds compared to baselines. The code is at https://github.com/sail-sg/SkyLadder.
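For readers curious what a short-to-long context window transition might look like in practice, below is a minimal Python sketch of a window scheduler. The linear ramp, the specific values (`initial_window`, `final_window`, `ramp_steps`), and the alignment rounding are illustrative assumptions, not the schedule described in the paper; SkyLadder's actual implementation is available in the linked repository.

```python
# Hypothetical sketch of a short-to-long context window schedule.
# The linear ramp and all parameter values below are illustrative
# assumptions, not the schedule used by SkyLadder.

def context_window_at_step(step: int,
                           initial_window: int = 512,
                           final_window: int = 32768,
                           ramp_steps: int = 50_000) -> int:
    """Grow the pretraining context window linearly from short to long."""
    if step >= ramp_steps:
        return final_window
    frac = step / ramp_steps
    window = initial_window + frac * (final_window - initial_window)
    # Round down to a multiple of the initial window so packed sequences
    # stay block-aligned (again, an assumption for illustration).
    return max(initial_window, (int(window) // initial_window) * initial_window)


if __name__ == "__main__":
    # Print the context window used at a few points during training.
    for step in (0, 10_000, 25_000, 50_000):
        print(step, context_window_at_step(step))
```

Sequences would then be packed (or truncated) to the window returned for the current step, so early training runs on short, cheap contexts and only later steps pay the cost of full-length attention.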
