Pretraining Large Language Models with NVFP4
September 29, 2025
Authors: NVIDIA, Felix Abecassis, Anjulie Agrusa, Dong Ahn, Jonah Alben, Stefania Alborghetti, Michael Andersch, Sivakumar Arayandi, Alexis Bjorlin, Aaron Blakeman, Evan Briones, Ian Buck, Bryan Catanzaro, Jinhang Choi, Mike Chrzanowski, Eric Chung, Victor Cui, Steve Dai, Bita Darvish Rouhani, Carlo del Mundo, Deena Donia, Burc Eryilmaz, Henry Estela, Abhinav Goel, Oleg Goncharov, Yugi Guvvala, Robert Hesse, Russell Hewett, Herbert Hum, Ujval Kapasi, Brucek Khailany, Mikail Khona, Nick Knight, Alex Kondratenko, Ronny Krashinsky, Ben Lanir, Simon Layton, Michael Lightstone, Daniel Lo, Paulius Micikevicius, Asit Mishra, Tim Moon, Deepak Narayanan, Chao Ni, Abhijit Paithankar, Satish Pasumarthi, Ankit Patel, Mostofa Patwary, Ashwin Poojary, Gargi Prasad, Sweta Priyadarshi, Yigong Qin, Xiaowei Ren, Oleg Rybakov, Charbel Sakr, Sanjeev Satheesh, Stas Sergienko, Pasha Shamis, Kirthi Shankar, Nishant Sharma, Mohammad Shoeybi, Michael Siu, Misha Smelyanskiy, Darko Stosic, Dusan Stosic, Bor-Yiing Su, Frank Sun, Nima Tajbakhsh, Shelby Thomas, Przemek Tredak, Evgeny Tsykunov, Gandhi Vaithilingam, Aditya Vavre, Rangharajan Venkatesan, Roger Waleffe, Qiyu Wan, Hexin Wang, Mengdi Wang, Lizzie Wei, Hao Wu, Evan Wu, Keith Wyss, Ning Xu, Jinze Xue, Charlene Yang, Yujia Zhai, Ruoxi Zhang, Jingyang Zhu, Zhongbo Zhu
cs.AI
Abstract
Large Language Models (LLMs) today are powerful problem solvers across many
domains, and they continue to get stronger as they scale in model size,
training set size, and training set quality, as shown by extensive research and
experimentation across the industry. Training a frontier model today requires
on the order of tens to hundreds of yottaflops, which is a massive investment
of time, compute, and energy. Improving pretraining efficiency is therefore
essential to enable the next generation of even more capable LLMs. While 8-bit
floating point (FP8) training is now widely adopted, transitioning to even
narrower precision, such as 4-bit floating point (FP4), could unlock additional
improvements in computational speed and resource utilization. However,
quantization at this level poses challenges to training stability, convergence,
and implementation, notably for large-scale models trained on long token
horizons.
In this study, we introduce a novel approach for stable and accurate training
of large language models (LLMs) using the NVFP4 format. Our method integrates
Random Hadamard transforms (RHT) to bound block-level outliers, employs a
two-dimensional quantization scheme for consistent representations across both
the forward and backward passes, utilizes stochastic rounding for unbiased
gradient estimation, and incorporates selective high-precision layers. We
validate our approach by training a 12-billion-parameter model on 10 trillion
tokens -- the longest publicly documented training run in 4-bit precision to
date. Our results show that the model trained with our NVFP4-based pretraining
technique achieves training loss and downstream task accuracies comparable to
an FP8 baseline. These findings highlight that NVFP4, when combined with our
training approach, represents a major step forward in narrow-precision LLM
training algorithms.
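
To make the recipe described above more concrete, the sketch below illustrates, in NumPy, three of the ingredients the abstract names: a random Hadamard transform applied per block to spread outliers, blockwise scaling onto an FP4 (E2M1) grid, and stochastic rounding so the quantizer is unbiased in expectation. This is a minimal illustration rather than the paper's implementation; the 16-element block size, the E2M1 level set, and all helper names are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code) of RHT + blockwise FP4 quantization
# with stochastic rounding. Block size and helper names are assumptions.
import numpy as np

# Representable magnitudes of an E2M1 (FP4) value; sign handled separately.
E2M1_LEVELS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction of an orthonormal n x n Hadamard matrix (n a power of two)."""
    assert n & (n - 1) == 0, "n must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def random_hadamard_transform(x_blocks: np.ndarray, seed: int = 0) -> np.ndarray:
    """Apply H @ diag(random signs) to each block to spread out per-block outliers."""
    n = x_blocks.shape[-1]
    rng = np.random.default_rng(seed)
    signs = rng.choice([-1.0, 1.0], size=n)
    return (x_blocks * signs) @ hadamard(n).T

def quantize_fp4_stochastic(x_blocks: np.ndarray, rng) -> tuple[np.ndarray, np.ndarray]:
    """Scale each block into the E2M1 range, then stochastically round to the grid."""
    amax = np.abs(x_blocks).max(axis=-1, keepdims=True)
    scale = np.where(amax > 0, amax / E2M1_LEVELS[-1], 1.0)
    y = x_blocks / scale
    # Full signed grid (negatives without the duplicate zero, then non-negatives).
    grid = np.concatenate([-E2M1_LEVELS[:0:-1], E2M1_LEVELS])
    idx_hi = np.searchsorted(grid, y).clip(1, len(grid) - 1)
    lo, hi = grid[idx_hi - 1], grid[idx_hi]
    # Round up with probability proportional to the distance from the lower
    # grid point, so E[q(y)] = y (unbiased quantization).
    p_hi = np.where(hi > lo, (y - lo) / (hi - lo), 0.0)
    q = np.where(rng.random(y.shape) < p_hi, hi, lo)
    return q, scale

# Usage: quantize a tensor reshaped into 16-element blocks (assumed block size).
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 16)).astype(np.float32)
x_rht = random_hadamard_transform(x)
q, scale = quantize_fp4_stochastic(x_rht, rng)
x_hat = q * scale  # dequantized values, still in the Hadamard-transformed domain
```

Because the Hadamard matrix is orthogonal, the transform can be undone after dequantization, and because rounding is stochastic rather than round-to-nearest, quantization error averages to zero over many steps, which is the property the abstract appeals to for unbiased gradient estimation.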