Eager Updates For Overlapped Communication and Computation in DiLoCo

February 18, 2025
Authors: Satyen Kale, Arthur Douillard, Yanislav Donchev
cs.AI

Abstract

Distributed optimization methods such as DiLoCo have been shown to be effective in training very large models across multiple distributed workers, such as datacenters. These methods split updates into two parts: an inner optimization phase, where the workers independently execute multiple optimization steps on their own local data, and an outer optimization step, where the inner updates are synchronized. While such approaches require orders of magnitude less communication than standard data-parallel training, in settings where the workers are datacenters, even the limited communication requirements of these approaches can still cause significant slowdowns due to the blocking necessary at each outer optimization step. In this paper, we investigate techniques to mitigate this issue by overlapping communication with computation in a manner that allows the outer optimization step to fully overlap with the inner optimization phase. We show that a particular variant, dubbed eager updates, provides competitive performance with standard DiLoCo in settings with low bandwidth between workers.
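
To make the round structure the abstract describes concrete, below is a minimal single-process sketch in Python/NumPy on a toy least-squares problem. Everything here is illustrative: the helper names (inner_phase, outer_step), the toy data, the hyperparameters, and in particular the 1/M mixing rule used for the eager outer gradient are assumptions on our part, not the paper's reference implementation. The sketch also collapses the M workers into a single replica and simulates the all-reduce; in a real deployment the all-reduce of round t's outer gradients would run asynchronously, overlapped with the inner phase of round t+1.

```python
# Minimal sketch of DiLoCo-style training with an eager, overlapped outer step,
# on a toy least-squares problem. Hypothetical throughout; see caveats above.
import numpy as np

rng = np.random.default_rng(0)
D, M, H, ROUNDS = 10, 4, 20, 30            # model dim, workers, inner steps, outer rounds
A = rng.normal(size=(M, 64, D))            # worker m's local features
b = rng.normal(size=(M, 64))               # worker m's local targets

def inner_phase(theta, m, lr=0.01):
    """Inner phase: worker m takes H independent SGD steps on its local data,
    then reports the outer gradient (initial params minus final params)."""
    x = theta.copy()
    for _ in range(H):
        grad = A[m].T @ (A[m] @ x - b[m]) / len(b[m])
        x -= lr * grad
    return theta - x

def outer_step(theta, mom, delta, lr=0.7, beta=0.9):
    """Outer step: Nesterov-momentum update on the outer gradient, mirroring
    DiLoCo's use of an outer optimizer on synchronized deltas."""
    mom = beta * mom + delta
    return theta - lr * (delta + beta * mom), mom

theta, mom = np.zeros(D), np.zeros(D)
in_flight = np.zeros(D)                    # averaged outer gradient whose all-reduce
                                           # is still overlapped with computation
for t in range(ROUNDS):
    deltas = np.stack([inner_phase(theta, m) for m in range(M)])
    # Eager update (assumed mixing rule): each worker contributes its own fresh
    # outer gradient immediately (a 1/M share) and fills in the remaining
    # (M-1)/M share with last round's already-synchronized average, so the
    # current round's all-reduce never blocks the next inner phase.
    eager = deltas.mean(axis=0) / M + (1 - 1 / M) * in_flight
    theta, mom = outer_step(theta, mom, eager)
    in_flight = deltas.mean(axis=0)        # round t's all-reduce result; in a real
                                           # system it lands during round t+1
loss = sum(np.mean((A[m] @ theta - b[m]) ** 2) for m in range(M)) / M
print(f"average local loss after {ROUNDS} rounds: {loss:.4f}")
```

The overlap the abstract refers to is visible in the last lines of the loop: the synchronized average computed at round t is only consumed at round t+1, so the blocking collective drops off the critical path, while each worker's own fresh outer gradient still enters the update immediately, i.e. eagerly.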
