

FLIQS: One-Shot Mixed-Precision Floating-Point and Integer Quantization Search

August 7, 2023
Authors: Jordan Dotzel, Gang Wu, Andrew Li, Muhammad Umar, Yun Ni, Mohamed S. Abdelfattah, Zhiru Zhang, Liqun Cheng, Martin G. Dixon, Norman P. Jouppi, Quoc V. Le, Sheng Li
cs.AI

Abstract

Quantization has become a mainstream compression technique for reducing model size, computational requirements, and energy consumption for modern deep neural networks (DNNs). With the improved numerical support in recent hardware, including multiple variants of integer and floating point, mixed-precision quantization has become necessary to achieve high-quality results with low model cost. Prior mixed-precision quantization methods have performed a post-training quantization search, which compromises on accuracy, or a differentiable quantization search, which leads to high memory usage from branching. Therefore, we propose the first one-shot mixed-precision quantization search that eliminates the need for retraining in both integer and low-precision floating point models. We evaluate our floating-point and integer quantization search (FLIQS) on multiple convolutional networks and vision transformer models to discover Pareto-optimal models. Our approach discovers models that improve upon uniform precision, manual mixed-precision, and recent integer quantization search methods. With the proposed integer quantization search, we increase the accuracy of ResNet-18 on ImageNet by 1.31 percentage points and ResNet-50 by 0.90 percentage points with equivalent model cost over previous methods. Additionally, for the first time, we explore a novel mixed-precision floating-point search and improve MobileNetV2 by up to 0.98 percentage points compared to prior state-of-the-art FP8 models. Finally, we extend FLIQS to simultaneously search a joint quantization and neural architecture space and improve the ImageNet accuracy by 2.69 percentage points with similar model cost on a MobileNetV2 search space.
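To make the mixed-precision idea concrete, below is a minimal sketch, not the FLIQS implementation, of per-layer symmetric integer fake quantization in which each layer is assigned its own bit width; an analogous quantizer exists for low-precision floating-point formats such as FP8. All layer names and the bit assignment are hypothetical and chosen only for illustration.

```python
import numpy as np

def fake_quantize_int(x: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric per-tensor integer fake quantization.

    Rounds x onto a signed integer grid with `bits` bits and maps it back
    to float, simulating the quantization error a low-precision layer sees.
    """
    qmax = 2 ** (bits - 1) - 1                     # e.g. 127 for INT8, 7 for INT4
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / qmax if max_abs > 0 else 1.0  # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

# Hypothetical per-layer precision assignment of the kind a mixed-precision
# search produces: sensitive layers keep more bits, robust layers use fewer.
layer_weights = {
    "conv1": np.random.randn(64, 3, 3, 3),
    "conv2": np.random.randn(128, 64, 3, 3),
    "fc":    np.random.randn(1000, 512),
}
bit_assignment = {"conv1": 8, "conv2": 4, "fc": 8}

for name, w in layer_weights.items():
    wq = fake_quantize_int(w, bit_assignment[name])
    mse = np.mean((w - wq) ** 2)
    print(f"{name}: {bit_assignment[name]}-bit, quantization MSE {mse:.2e}")
```

A mixed-precision search explores such bit assignments jointly, trading the per-layer quantization error shown above against model cost (size, compute, or energy) to find Pareto-optimal configurations.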