
Adding NVMe SSDs to Enable and Accelerate 100B Model Fine-tuning on a Single GPU

March 11, 2024
Authors: Changyue Liao, Mo Sun, Zihan Yang, Kaiqi Chen, Binhang Yuan, Fei Wu, Zeke Wang
cs.AI

Abstract

Recent advances in large language models have brought immense value to the world, with their superior capabilities stemming from the massive number of parameters they utilize. However, even the GPUs with the highest memory capacity, currently peaking at 80GB, are far from sufficient to accommodate these vast parameters and their associated optimizer states when conducting stochastic gradient descent-based optimization. One approach to hosting such huge models is to aggregate device memory from many GPUs. However, this approach is prohibitively expensive for most academic researchers, who typically cannot budget for many high-end GPU servers. In this paper, we focus on huge-model fine-tuning on a single, even low-end, GPU in a commodity server, which is accessible to most AI researchers. In such a scenario, the state-of-the-art work ZeRO-Infinity suffers from two severe issues when running on a commodity server: 1) low GPU utilization due to inefficient swapping, and 2) limited trainable model size due to CPU memory capacity. The underlying reason is that ZeRO-Infinity is optimized for running on high-end GPU servers. To this end, we present Fuyou, a low-cost training framework that enables efficient fine-tuning of huge 100B-scale models on a low-end server with a low-end GPU and limited CPU memory capacity. The key idea is to add SSD-CPU communication as an optimization dimension and thus carefully co-optimize computation and data swapping in a systematic way to maximize GPU utilization. The experimental results show that 1) Fuyou is able to fine-tune 175B GPT-3 on a consumer RTX 4090 GPU with high GPU utilization, which ZeRO-Infinity fails to do; and 2) when training the smaller GPT-3 13B model, Fuyou achieves 156 TFLOPS on an RTX 4090 GPU while ZeRO-Infinity achieves only 45 TFLOPS.
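
To make the key idea concrete, the sketch below shows in PyTorch how SSD-to-CPU prefetch can be overlapped with GPU computation. It is a minimal, hypothetical illustration under assumed names (the /mnt/nvme/layer_i.npy files, toy layer sizes, a two-slot staging queue), not Fuyou's actual implementation.

```python
# Hypothetical sketch of the swapping idea (not Fuyou's actual code):
# a background thread streams layer weights from NVMe into pinned CPU
# buffers while the GPU computes, so SSD-CPU traffic overlaps compute
# and only a bounded number of layers ever occupy CPU RAM.
import queue
import threading

import numpy as np
import torch

NUM_LAYERS, HIDDEN = 4, 1024  # toy sizes; real models have billions of params

# Assumed layout: each layer's weights live on the NVMe SSD as a
# memory-mapped .npy file under /mnt/nvme/ (an illustrative path).
ssd_weights = [
    np.lib.format.open_memmap(f"/mnt/nvme/layer_{i}.npy", mode="w+",
                              dtype=np.float32, shape=(HIDDEN, HIDDEN))
    for i in range(NUM_LAYERS)
]

staged = queue.Queue(maxsize=2)  # bounded queue caps the CPU-memory footprint

def ssd_reader():
    """Prefetch layers SSD -> pinned CPU RAM ahead of the GPU."""
    for i in range(NUM_LAYERS):
        buf = torch.empty(HIDDEN, HIDDEN, pin_memory=True)  # pinned staging
        buf.copy_(torch.from_numpy(ssd_weights[i]))         # SSD -> CPU RAM
        staged.put(buf)  # blocks if the GPU falls behind, bounding RAM use

threading.Thread(target=ssd_reader, daemon=True).start()

x = torch.randn(16, HIDDEN, device="cuda")
for _ in range(NUM_LAYERS):
    w = staged.get().to("cuda", non_blocking=True)  # pinned CPU -> GPU copy
    x = torch.relu(x @ w)                           # compute on this layer
torch.cuda.synchronize()
```

The bounded staging queue is the point: CPU RAM holds only the layers in flight rather than the whole model, which is how an SSD tier can lift the trainable-model-size cap imposed by CPU memory capacity. A real system would also reuse pinned buffers and overlap the reverse spill path and the optimizer step, which is the kind of systematic co-optimization of computation and data swapping the abstract describes.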
