DiLoCoX: A Low-Communication Large-Scale Training Framework for Decentralized Cluster
June 26, 2025
Authors: Ji Qi, WenPeng Zhu, Li Li, Ming Wu, YingJun Wu, Wu He, Xun Gao, Jason Zeng, Michael Heinrich
cs.AI
Abstract
The distributed training of foundation models, particularly large language
models (LLMs), demands a high level of communication. Consequently, it is
highly dependent on a centralized cluster with fast and reliable interconnects.
Can we conduct training on slow networks and thereby unleash the power of
decentralized clusters when dealing with models exceeding 100 billion
parameters? In this paper, we propose DiLoCoX, a low-communication large-scale
decentralized cluster training framework. It combines Pipeline Parallelism with
Dual Optimizer Policy, One-Step-Delay Overlap of Communication and Local
Training, and an Adaptive Gradient Compression Scheme. This combination
significantly improves the scale of parameters and the speed of model
pre-training. We justify the benefits of one-step-delay overlap of
communication and local training, as well as the adaptive gradient compression
scheme, through a theoretical analysis of convergence. Empirically, we
demonstrate that DiLoCoX is capable of pre-training a 107B foundation model
over a 1Gbps network. Compared to vanilla AllReduce, DiLoCoX can achieve a 357x
speedup in distributed training while maintaining negligible degradation in
model convergence. To the best of our knowledge, this is the first
decentralized training framework successfully applied to models with over 100
billion parameters.
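
As a rough illustration of the mechanisms named in the abstract, the sketch below simulates the one-step-delay overlap of communication and local training together with a compressed pseudo-gradient exchange. It is a minimal NumPy toy on a quadratic loss: the top-k compressor, the plain-SGD inner and outer updates, and every constant (DIM, WORKERS, LOCAL_STEPS, ROUNDS, INNER_LR, OUTER_LR, TOPK) are assumptions made for illustration, not the algorithmic choices of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM, WORKERS, LOCAL_STEPS, ROUNDS = 64, 4, 20, 12
    INNER_LR, OUTER_LR, TOPK = 0.05, 0.7, 16

    target = rng.normal(size=DIM)       # optimum of the toy quadratic loss
    global_params = np.zeros(DIM)       # replicated "outer" model parameters

    def local_training(start_params):
        # Inner loop: LOCAL_STEPS of plain SGD on 0.5 * ||x - target||^2.
        x = start_params.copy()
        for _ in range(LOCAL_STEPS):
            noisy_grad = (x - target) + 0.01 * rng.normal(size=DIM)
            x -= INNER_LR * noisy_grad
        return start_params - x         # pseudo-gradient to be communicated

    def compress(delta, k=TOPK):
        # Stand-in for the adaptive compression scheme: keep only the k
        # largest-magnitude entries (top-k sparsification).
        out = np.zeros_like(delta)
        keep = np.argsort(np.abs(delta))[-k:]
        out[keep] = delta[keep]
        return out

    # One-step-delay overlap: local training for round t starts from parameters
    # that do NOT yet include round t-1's outer update, because the communication
    # of that update is (conceptually) still in flight and only lands afterwards.
    in_flight = None
    for t in range(ROUNDS):
        new_pseudo_grads = [compress(local_training(global_params))
                            for _ in range(WORKERS)]
        if in_flight is not None:
            # Round t-1's pseudo-gradients have now "arrived"; apply the delayed
            # outer SGD step using their (sparse) average.
            global_params -= OUTER_LR * np.mean(in_flight, axis=0)
        in_flight = new_pseudo_grads
        outer_loss = 0.5 * np.sum((global_params - target) ** 2)
        print(f"round {t}: outer loss {outer_loss:.4f}")

The point of applying the outer update one round late is that the exchange of round t-1's compressed pseudo-gradients can run concurrently with round t's local training, so communication time on a slow (e.g. 1Gbps) link is hidden behind computation rather than stalling it.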