Simplified and Generalized Masked Diffusion for Discrete Data

June 6, 2024
Authors: Jiaxin Shi, Kehang Han, Zhe Wang, Arnaud Doucet, Michalis K. Titsias
cs.AI

Abstract

Masked (or absorbing) diffusion is actively explored as an alternative to autoregressive models for generative modeling of discrete data. However, existing work in this area has been hindered by unnecessarily complex model formulations and unclear relationships between different perspectives, leading to suboptimal parameterization, training objectives, and ad hoc adjustments to counteract these issues. In this work, we aim to provide a simple and general framework that unlocks the full potential of masked diffusion models. We show that the continuous-time variational objective of masked diffusion models is a simple weighted integral of cross-entropy losses. Our framework also enables training generalized masked diffusion models with state-dependent masking schedules. When evaluated by perplexity, our models trained on OpenWebText surpass prior diffusion language models at GPT-2 scale and demonstrate superior performance on 4 out of 5 zero-shot language modeling tasks. Furthermore, our models vastly outperform previous discrete diffusion models on pixel-level image modeling, achieving 2.78 (CIFAR-10) and 3.42 (ImageNet 64×64) bits per dimension that are comparable to or better than autoregressive models of similar sizes.
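To make the headline claim concrete: with a masking schedule α_t (the probability that a token remains unmasked at time t, decreasing from 1 to 0), the continuous-time negative ELBO described above takes the form of an integral over t of the weight −α_t′/(1 − α_t) times the cross-entropy of the model's clean-token predictions at the masked positions. The sketch below is a minimal, hypothetical illustration of such a loss, not the authors' implementation: it assumes a linear schedule α_t = 1 − t (so the weight is 1/t), a stand-in `model` returning per-position logits over the vocabulary, and an illustrative reserved `MASK_ID`.

```python
import torch
import torch.nn.functional as F

MASK_ID = 0  # hypothetical: assumes index 0 is reserved for the mask token


def masked_diffusion_loss(model, x0, vocab_size):
    """Monte Carlo estimate of the weighted cross-entropy form of the
    continuous-time objective, under an assumed linear schedule
    alpha_t = 1 - t (so -alpha_t' / (1 - alpha_t) = 1/t)."""
    B, L = x0.shape
    t = torch.rand(B, 1).clamp(min=1e-5)   # t ~ Uniform(0, 1), kept away from 0
    alpha_t = 1.0 - t                      # P(token still unmasked at time t)

    # Forward process: independently replace each token by MASK w.p. 1 - alpha_t.
    is_masked = torch.rand(B, L) < (1.0 - alpha_t)
    xt = torch.where(is_masked, torch.full_like(x0, MASK_ID), x0)

    # Model predicts a distribution over the clean token at every position.
    logits = model(xt)                     # (B, L, vocab_size)
    ce = F.cross_entropy(
        logits.reshape(-1, vocab_size), x0.reshape(-1), reduction="none"
    ).reshape(B, L)

    # Weight 1/t, applied only at masked positions; this is the negative ELBO.
    loss = ((1.0 / t) * is_masked.float() * ce).sum(dim=1).mean()
    return loss
```

Under the linear schedule the 1/t weight grows without bound as t approaches 0, which is why the sampled times are clamped away from zero in this sketch.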
