BitsFusion: 1.99 bits Weight Quantization of Diffusion Model

June 6, 2024
Authors: Yang Sui, Yanyu Li, Anil Kag, Yerlan Idelbayev, Junli Cao, Ju Hu, Dhritiman Sagar, Bo Yuan, Sergey Tulyakov, Jian Ren
cs.AI

Abstract

Diffusion-based image generation models have achieved great success in recent years by showing the capability of synthesizing high-quality content. However, these models contain a huge number of parameters, resulting in a significantly large model size. Saving and transferring them is a major bottleneck for various applications, especially those running on resource-constrained devices. In this work, we develop a novel weight quantization method that quantizes the UNet from Stable Diffusion v1.5 to 1.99 bits, achieving a model with 7.9X smaller size while exhibiting even better generation quality than the original one. Our approach includes several novel techniques, such as assigning optimal bits to each layer, initializing the quantized model for better performance, and improving the training strategy to dramatically reduce quantization error. Furthermore, we extensively evaluate our quantized model across various benchmark datasets and through human evaluation to demonstrate its superior generation quality.
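
The abstract names three ingredients at a high level: assigning an optimal bit-width to each layer, initializing the quantized model, and an improved training strategy. As a rough illustration of the first idea only, the sketch below shows how a per-layer bit assignment translates into an average bit-width over the weights. The layer names, the particular bit assignment, and the simple uniform quantizer are assumptions made for illustration; they are not the authors' BitsFusion procedure.

# Hypothetical sketch of per-layer mixed-precision weight quantization (illustration only,
# not the BitsFusion implementation). Each layer gets its own bit-width, and the effective
# model bit-width is the parameter-count-weighted average over all layers.
import torch

def fake_quantize(w: torch.Tensor, bits: int) -> torch.Tensor:
    # Uniform symmetric quantization to `bits` bits, dequantized back to float
    # (simulated quantization, as commonly used during quantization-aware training).
    qmax = 2 ** (bits - 1) - 1
    qmin = -(2 ** (bits - 1))
    scale = w.abs().max() / max(qmax, 1) + 1e-12
    return torch.clamp(torch.round(w / scale), qmin, qmax) * scale

def average_bits(num_params: dict, bit_assignment: dict) -> float:
    # Parameter-count-weighted mean bit-width across layers.
    total = sum(num_params.values())
    return sum(num_params[n] * bit_assignment[n] for n in num_params) / total

# Two hypothetical UNet layers; a more sensitive layer receives a higher bit-width.
weights = {
    "down_block.conv": torch.randn(320, 320, 3, 3),
    "mid_block.attn_proj": torch.randn(640, 640),
}
bit_assignment = {"down_block.conv": 2, "mid_block.attn_proj": 3}

quantized = {name: fake_quantize(w, bit_assignment[name]) for name, w in weights.items()}
sizes = {name: w.numel() for name, w in weights.items()}
print(f"average bit-width: {average_bits(sizes, bit_assignment):.2f} bits per weight")

Under such a scheme, reaching an average near 1.99 bits implies that some layers are stored with fewer bits than others; how the optimal assignment, the quantized-model initialization, and the training strategy are actually carried out is detailed in the paper.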
