veScale-FSDP: Flexible and High-Performance FSDP at Scale
February 25, 2026
Authors: Zezhou Wang, Youjie Li, Zhiqi Lin, Jiacheng Yang, Cong Xie, Guanyu Feng, Zheng Zhong, Ziyue Huang, Hongyu Zhu, Zhi Zhang, Yanghua Peng, Xin Liu
cs.AI
Abstract
Fully Sharded Data Parallel (FSDP), also known as ZeRO, is widely used for training large-scale models owing to its flexibility and minimal intrusion into model code. However, current FSDP systems struggle with structure-aware training methods (e.g., block-wise quantized training) and with non-element-wise optimizers (e.g., Shampoo and Muon) used in cutting-edge models (e.g., Gemini, Kimi K2). FSDP's fixed element- or row-wise sharding formats conflict with block-structured computation patterns. In addition, today's implementations fall short in communication and memory efficiency, limiting scaling to tens of thousands of GPUs. We introduce veScale-FSDP, a redesigned FSDP system that couples a flexible sharding format, RaggedShard, with a structure-aware planning algorithm to deliver both flexibility and performance at scale. veScale-FSDP natively supports the efficient data placement required by FSDP, enabling block-wise quantization and non-element-wise optimizers. As a result, veScale-FSDP achieves 5–66% higher throughput and 16–30% lower memory usage than existing FSDP systems, while scaling efficiently to tens of thousands of GPUs.
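The abstract's core tension can be illustrated concretely. A hypothetical sketch (not veScale-FSDP's actual API) of why FSDP's flat, element-wise sharding conflicts with block-wise quantization: FSDP flattens a weight and gives each rank a contiguous chunk of the 1-D buffer, while block-wise quantization operates on 2-D tiles of the original matrix, so a single quantization block can end up scattered across ranks. Sizes below (an 8x8 weight, 4 ranks, 4x4 tiles) are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration: flat element-wise sharding vs. 2-D
# quantization blocks. Not veScale-FSDP code.
rows, cols = 8, 8
world_size = 4
block = 4  # quantization tile size (block x block)

# FSDP-style flat sharding: flatten the weight, give each rank a
# contiguous chunk of the 1-D buffer.
n = rows * cols
shard_len = n // world_size
owner = np.arange(n) // shard_len  # rank owning each flat element

# Flat indices covered by the top-left 4x4 quantization tile.
r, c = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
flat_idx = (r * cols + c).ravel()

# The tile's elements land on more than one rank, so computing its
# quantization scale requires cross-rank communication.
ranks = sorted(set(owner[flat_idx].tolist()))
print(ranks)  # -> [0, 1]: one tile split across two ranks
```

A structure-aware format like the paper's RaggedShard instead aims to choose shard boundaries that keep each block's data on a single rank, avoiding this cross-rank dependency.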