Microscaling Data Formats for Deep Learning
October 16, 2023
Authors: Bita Darvish Rouhani, Ritchie Zhao, Ankit More, Mathew Hall, Alireza Khodamoradi, Summer Deng, Dhruv Choudhary, Marius Cornea, Eric Dellinger, Kristof Denolf, Stosic Dusan, Venmugil Elango, Maximilian Golub, Alexander Heinecke, Phil James-Roxby, Dharmesh Jani, Gaurav Kolhe, Martin Langhammer, Ada Li, Levi Melnick, Maral Mesmakhosroshahi, Andres Rodriguez, Michael Schulte, Rasoul Shafipour, Lei Shao, Michael Siu, Pradeep Dubey, Paulius Micikevicius, Maxim Naumov, Colin Verilli, Ralph Wittig, Eric Chung
cs.AI
Abstract
Narrow bit-width data formats are key to reducing the computational and
storage costs of modern deep learning applications. This paper evaluates
Microscaling (MX) data formats that combine a per-block scaling factor with
narrow floating-point and integer types for individual elements. MX formats
balance the competing needs of hardware efficiency, model accuracy, and user
friction. Empirical results on over two dozen benchmarks demonstrate the
practicality of MX data formats as a drop-in replacement for baseline FP32 for
AI inference and training with low user friction. We also show the first
instance of training generative language models at sub-8-bit weights,
activations, and gradients with minimal accuracy loss and no modifications to
the training recipe.
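The core idea of the abstract, quantizing a tensor in blocks where each block shares one scaling factor and elements are stored in a narrow type, can be sketched as follows. This is a minimal NumPy illustration of an MXINT-like integer variant, assuming a power-of-two shared scale and a block size of 32; it is not the paper's implementation, and the full MX specification also defines narrow floating-point element types.

```python
import numpy as np

def mx_quantize_block(x, elem_bits=8, block_size=32):
    """Sketch of MX-style block quantization: each block of `block_size`
    elements shares one power-of-two scale; elements become narrow ints.
    (Illustrative only; block size and scale encoding are assumptions.)"""
    x = np.asarray(x, dtype=np.float32)
    assert x.size % block_size == 0, "pad the tensor to a block multiple"
    blocks = x.reshape(-1, block_size)
    qmax = 2 ** (elem_bits - 1) - 1                 # e.g. 127 for 8-bit
    # Shared per-block scale: smallest power of two covering the block max.
    absmax = np.abs(blocks).max(axis=1, keepdims=True)
    safe = np.where(absmax > 0, absmax, 1.0)        # avoid log2(0)
    scale = 2.0 ** np.ceil(np.log2(safe / qmax))
    # Narrow per-element values, rounded and clipped to the int range.
    q = np.clip(np.round(blocks / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def mx_dequantize(q, scale):
    """Recover an approximation of the original tensor."""
    return (q.astype(np.float32) * scale).reshape(-1)

# Usage: round-trip a random tensor and check the reconstruction error.
x = np.random.randn(1024).astype(np.float32)
q, s = mx_quantize_block(x)
err = np.abs(mx_dequantize(q, s) - x).max()
```

The per-block scale lets nearby elements share dynamic range, which is why narrow element types (here int8, but sub-8-bit in the paper) can track FP32 closely.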