
MegaTrain: Full Precision Training of 100B+ Parameter Large Language Models on a Single GPU

April 6, 2026
Authors: Zhengqing Yuan, Hanchi Sun, Lichao Sun, Yanfang Ye
cs.AI

Abstract

We present MegaTrain, a memory-centric system that efficiently trains 100B+ parameter large language models at full precision on a single GPU. Unlike traditional GPU-centric systems, MegaTrain stores parameters and optimizer states in host memory (CPU memory) and treats the GPU as a transient compute engine. For each layer, we stream parameters in and compute gradients out, minimizing persistent device state. To overcome the CPU-GPU bandwidth bottleneck, we adopt two key optimizations. 1) We introduce a pipelined, double-buffered execution engine that overlaps parameter prefetching, computation, and gradient offloading across multiple CUDA streams, keeping the GPU continuously busy. 2) We replace persistent autograd graphs with stateless layer templates, binding weights dynamically as they stream in; this eliminates persistent graph metadata while providing scheduling flexibility. On a single H200 GPU with 1.5TB of host memory, MegaTrain reliably trains models of up to 120B parameters. It achieves 1.84× the training throughput of DeepSpeed ZeRO-3 with CPU offloading when training 14B models. In addition, MegaTrain enables training a 7B model with a 512k-token context on a single GH200.
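The streaming pattern in the abstract can be illustrated with a minimal, hypothetical sketch: while layer i computes, layer i+1's parameters are prefetched into a second buffer, and the finished layer's gradients are offloaded concurrently. Here plain threads stand in for the CUDA streams the paper describes, and `fetch_params`, `compute`, and `offload_grads` are illustrative callbacks, not MegaTrain's actual API. The layers themselves are stateless templates: weights are bound only at call time, mirroring the paper's second optimization.

```python
from concurrent.futures import ThreadPoolExecutor

def train_step(layers, fetch_params, compute, offload_grads):
    """Double-buffered streaming over layers.

    fetch_params(layer)        -> params   (host -> device copy in a real system)
    compute(layer, params)     -> grads    (stateless layer: weights bound per call)
    offload_grads(layer, grads)            (device -> host copy in a real system)
    """
    with ThreadPoolExecutor(max_workers=3) as pool:
        # Prime the first buffer.
        next_fut = pool.submit(fetch_params, layers[0])
        for i, layer in enumerate(layers):
            params = next_fut.result()  # wait for this layer's prefetch
            if i + 1 < len(layers):
                # Prefetch the next layer's weights into the other buffer
                # while the current layer computes.
                next_fut = pool.submit(fetch_params, layers[i + 1])
            grads = compute(layer, params)
            # Offload gradients asynchronously; the context manager
            # waits for all offloads before returning.
            pool.submit(offload_grads, layer, grads)
```

In the real system the three activities run on separate CUDA streams rather than CPU threads, so parameter upload, kernel execution, and gradient download genuinely overlap on the PCIe/NVLink and compute engines.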