2BP: 2-Stage Backpropagation
May 28, 2024
Authors: Christopher Rae, Joseph K. L. Lee, James Richings
cs.AI
Abstract
As Deep Neural Networks (DNNs) grow in size and complexity, they often exceed
the memory capacity of a single accelerator, necessitating the sharding of
model parameters across multiple accelerators. Pipeline parallelism is a
commonly used sharding strategy for training large DNNs. However, current
implementations of pipeline parallelism are being unintentionally bottlenecked
by the automatic differentiation tools provided by ML frameworks. This paper
introduces 2-stage backpropagation (2BP). By splitting the backward propagation
step into two separate stages, we can reduce idle compute time. We tested 2BP
on various model architectures and pipelining schedules, achieving increases in
throughput in all cases. Using 2BP, we were able to achieve a 1.70x increase in
throughput compared to traditional methods when training a LLaMa-like
transformer with 7 billion parameters across 4 GPUs.
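To make the two-stage split concrete, the sketch below illustrates the idea for a single linear layer: stage 1 (the input gradient, which the previous pipeline stage is waiting on) is computed immediately during the backward pass, while stage 2 (the parameter gradients) is pushed onto a queue and computed later, when the device would otherwise be idle. This is a minimal, hypothetical PyTorch example, not the authors' implementation; the names TwoStageLinearFn, deferred, and run_deferred_weight_grads are illustrative.

```python
import torch

# Minimal 2BP-style sketch for y = x @ W^T + b (illustrative assumption,
# not the paper's code):
#   stage 1 (backward-p1): grad w.r.t. the input, computed immediately so it
#                          can be passed to the previous pipeline stage
#   stage 2 (backward-p2): grads w.r.t. the parameters, deferred to idle time

deferred = []  # queue of pending stage-2 (parameter-gradient) work

class TwoStageLinearFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight, bias):
        ctx.save_for_backward(x)
        ctx.weight, ctx.bias = weight, bias
        return x @ weight.t() + bias

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Stage 1: input gradient, needed upstream right away.
        grad_input = grad_output @ ctx.weight
        # Stage 2: parameter gradients are queued, not computed here.
        deferred.append((x, grad_output, ctx.weight, ctx.bias))
        # Return None for the weight/bias slots; autograd accumulates nothing.
        return grad_input, None, None

def run_deferred_weight_grads():
    """Drain the stage-2 queue and accumulate parameter gradients."""
    with torch.no_grad():
        for x, grad_out, weight, bias in deferred:
            gw = grad_out.t() @ x      # dL/dW
            gb = grad_out.sum(dim=0)   # dL/db
            weight.grad = gw if weight.grad is None else weight.grad + gw
            bias.grad = gb if bias.grad is None else bias.grad + gb
    deferred.clear()

# Usage: backward() performs only stage 1; stage 2 runs when convenient.
w = torch.randn(8, 16, requires_grad=True)
b = torch.zeros(8, requires_grad=True)
x = torch.randn(4, 16, requires_grad=True)
loss = TwoStageLinearFn.apply(x, w, b).pow(2).mean()
loss.backward()               # x.grad is set; w.grad and b.grad are still None
run_deferred_weight_grads()   # now w.grad and b.grad are populated
```

In a pipeline-parallel run, decoupling the two stages this way lets each device hand the input gradient to its neighbour as early as possible and fill the resulting idle bubbles with the deferred weight-gradient work, which is the source of the throughput gains reported in the abstract.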