
Ambient Diffusion Omni: Training Good Models with Bad Data

June 10, 2025
作者: Giannis Daras, Adrian Rodriguez-Munoz, Adam Klivans, Antonio Torralba, Constantinos Daskalakis
cs.AI

Abstract

We show how to use low-quality, synthetic, and out-of-distribution images to improve the quality of a diffusion model. Typically, diffusion models are trained on curated datasets that emerge from highly filtered data pools from the Web and other sources. We show that there is immense value in the lower-quality images that are often discarded. We present Ambient Diffusion Omni, a simple, principled framework to train diffusion models that can extract signal from all available images during training. Our framework exploits two properties of natural images -- spectral power law decay and locality. We first validate our framework by successfully training diffusion models with images synthetically corrupted by Gaussian blur, JPEG compression, and motion blur. We then use our framework to achieve state-of-the-art ImageNet FID, and we show significant improvements in both image quality and diversity for text-to-image generative modeling. The core insight is that noise dampens the initial skew between the desired high-quality distribution and the mixed distribution we actually observe. We provide rigorous theoretical justification for our approach by analyzing the trade-off between learning from biased data versus limited unbiased data across diffusion times.
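The core insight, that noise dampens the skew between the desired distribution and the observed mixture, can be illustrated with a toy calculation. This is a sketch under assumed one-dimensional Gaussian stand-ins for the two distributions (the parameters `mu_bias` and `var_bias` are hypothetical, not taken from the paper): convolving both the clean distribution and a biased surrogate with Gaussian noise of scale sigma, as in the forward diffusion process, shrinks the KL divergence between them toward zero.

```python
import math

def kl_gaussians(mu1, var1, mu2, var2):
    """Closed-form KL( N(mu1, var1) || N(mu2, var2) )."""
    return 0.5 * (math.log(var2 / var1)
                  + (var1 + (mu1 - mu2) ** 2) / var2
                  - 1.0)

def kl_after_noise(sigma, mu_bias=0.5, var_bias=1.44):
    """KL between a 'clean' N(0, 1) and a hypothetical biased surrogate
    N(mu_bias, var_bias), after both are convolved with Gaussian noise
    of variance sigma**2 (adding independent noise adds variances)."""
    n = sigma ** 2
    return kl_gaussians(0.0, 1.0 + n, mu_bias, var_bias + n)

# The divergence between the two noised distributions decays as the
# diffusion noise level grows, illustrating the claimed dampening.
for s in [0.0, 1.0, 3.0, 10.0]:
    print(f"sigma={s:5.1f}  KL={kl_after_noise(s):.4f}")
```

In the paper's terms, this is why biased or corrupted data can still be safely used at the high-noise end of the diffusion schedule, while low-noise times rely more on the limited unbiased data.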