

BitsFusion: 1.99 bits Weight Quantization of Diffusion Model

June 6, 2024
作者: Yang Sui, Yanyu Li, Anil Kag, Yerlan Idelbayev, Junli Cao, Ju Hu, Dhritiman Sagar, Bo Yuan, Sergey Tulyakov, Jian Ren
cs.AI

Abstract

Diffusion-based image generation models have achieved great success in recent years by showing the capability of synthesizing high-quality content. However, these models contain a huge number of parameters, resulting in a significantly large model size. Saving and transferring them is a major bottleneck for various applications, especially those running on resource-constrained devices. In this work, we develop a novel weight quantization method that quantizes the UNet from Stable Diffusion v1.5 to 1.99 bits, achieving a model with 7.9X smaller size while exhibiting even better generation quality than the original one. Our approach includes several novel techniques, such as assigning optimal bits to each layer, initializing the quantized model for better performance, and improving the training strategy to dramatically reduce quantization error. Furthermore, we extensively evaluate our quantized model across various benchmark datasets and through human evaluation to demonstrate its superior generation quality.
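To make the idea of low-bit weight quantization concrete, here is a minimal NumPy sketch of generic affine uniform quantization of a weight tensor. This is an illustrative example only, not the BitsFusion method from the paper; the function names and the 2-bit setting are assumptions made for demonstration:

```python
import numpy as np

def quantize_weights(w, bits):
    """Affine uniform quantization of a weight tensor to `bits` bits.

    Maps floats in [w.min(), w.max()] onto integer codes in
    [0, 2**bits - 1]; returns codes plus the scale and zero point
    needed to dequantize.
    """
    levels = 2 ** bits - 1                 # number of quantization steps
    scale = (w.max() - w.min()) / levels   # step size over the weight range
    zero_point = w.min()
    codes = np.round((w - zero_point) / scale).astype(np.int32)
    return codes, scale, zero_point

def dequantize_weights(codes, scale, zero_point):
    """Reconstruct approximate float weights from integer codes."""
    return codes * scale + zero_point

# Example: quantize a random 64x64 layer to 2 bits and measure the error.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
codes, scale, zp = quantize_weights(w, bits=2)
w_hat = dequantize_weights(codes, scale, zp)
print("max abs error:", np.abs(w - w_hat).max())
```

At 2 bits each weight collapses to one of only four values, so the reconstruction error of this naive scheme is large; the techniques the abstract lists (per-layer bit allocation, careful initialization of the quantized model, and an improved training strategy) are what make such an aggressive budget viable in practice.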

