Model Merging in Pre-training of Large Language Models
May 17, 2025
作者: Yunshui Li, Yiyuan Ma, Shen Yan, Chaoyi Zhang, Jing Liu, Jianqiao Lu, Ziwen Xu, Mengzhao Chen, Minrui Wang, Shiyi Zhan, Jin Ma, Xunhao Lai, Yao Luo, Xingyan Bin, Hongbin Ren, Mingji Han, Wenhao Hao, Bairen Yi, LingJun Liu, Bole Ma, Xiaoying Jia, Zhou Xun, Liang Xiang, Yonghui Wu
cs.AI
Abstract
Model merging has emerged as a promising technique for enhancing large
language models, though its application in large-scale pre-training remains
relatively unexplored. In this paper, we present a comprehensive investigation
of model merging techniques during the pre-training process. Through extensive
experiments with both dense and Mixture-of-Experts (MoE) architectures ranging
from millions to over 100 billion parameters, we demonstrate that merging
checkpoints trained with constant learning rates not only achieves significant
performance improvements but also enables accurate prediction of annealing
behavior. These improvements lead to both more efficient model development and
significantly lower training costs. Our detailed ablation studies on merging
strategies and hyperparameters provide new insights into the underlying
mechanisms while uncovering novel applications. Through comprehensive
experimental analysis, we offer the open-source community practical
pre-training guidelines for effective model merging.
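
The abstract describes merging checkpoints saved during a constant-learning-rate pre-training run, but does not spell out the merging recipe here. As a rough illustration only, below is a minimal PyTorch sketch of uniform (or weighted) checkpoint averaging; the `merge_checkpoints` helper, the checkpoint file names, and the uniform weights are assumptions for the example, not the paper's exact method.

```python
# Minimal sketch (assumed recipe, not the paper's exact procedure):
# merge several checkpoints from one training run by averaging their parameters.
import torch

def merge_checkpoints(ckpt_paths, weights=None):
    """Average the parameter tensors of several checkpoints.

    ckpt_paths: paths to state_dict files saved during pre-training (hypothetical names below).
    weights:    optional per-checkpoint coefficients; defaults to uniform averaging.
    """
    if weights is None:
        weights = [1.0 / len(ckpt_paths)] * len(ckpt_paths)
    assert abs(sum(weights) - 1.0) < 1e-6, "merge coefficients should sum to 1"

    merged = None
    for path, w in zip(ckpt_paths, weights):
        state = torch.load(path, map_location="cpu")
        if merged is None:
            # Initialize the running weighted sum with the first checkpoint.
            merged = {k: w * v.float() for k, v in state.items()}
        else:
            # Accumulate the remaining checkpoints into the weighted sum.
            for k, v in state.items():
                merged[k] += w * v.float()
    return merged

# Example usage: uniformly average three consecutive checkpoints from a
# constant-learning-rate run (file names are placeholders).
merged_state = merge_checkpoints(
    ["ckpt_step_10000.pt", "ckpt_step_11000.pt", "ckpt_step_12000.pt"]
)
torch.save(merged_state, "merged.pt")
```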