

DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation

November 24, 2025
作者: Zehong Ma, Longhui Wei, Shuai Wang, Shiliang Zhang, Qi Tian
cs.AI

Abstract

Pixel diffusion aims to generate images directly in pixel space in an end-to-end fashion. This approach avoids the limitations of the VAE in two-stage latent diffusion, offering higher model capacity. Existing pixel diffusion models suffer from slow training and inference because they usually model both high-frequency signals and low-frequency semantics within a single diffusion transformer (DiT). To pursue a more efficient pixel diffusion paradigm, we propose the frequency-DeCoupled pixel diffusion framework. With the intuition of decoupling the generation of high- and low-frequency components, we leverage a lightweight pixel decoder to generate high-frequency details conditioned on semantic guidance from the DiT, freeing the DiT to specialize in modeling low-frequency semantics. In addition, we introduce a frequency-aware flow-matching loss that emphasizes visually salient frequencies while suppressing insignificant ones. Extensive experiments show that DeCo achieves superior performance among pixel diffusion models, attaining FIDs of 1.62 (256x256) and 2.22 (512x512) on ImageNet and closing the gap with latent diffusion methods. Furthermore, our pretrained text-to-image model achieves a leading overall score of 0.86 on GenEval in system-level comparison. Code is publicly available at https://github.com/Zehong-Ma/DeCo.
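The frequency-aware flow-matching loss can be pictured as weighting the flow-matching residual per frequency band. A minimal NumPy sketch of the idea follows; the function name, the radial weighting scheme, and the `alpha` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def frequency_aware_fm_loss(pred_velocity, target_velocity, alpha=0.5):
    """Sketch of a frequency-weighted flow-matching loss (assumed form).

    pred_velocity, target_velocity: (H, W) arrays, a single channel for brevity.
    alpha: hypothetical knob controlling how sharply high frequencies are
    down-weighted; the actual weighting in DeCo may differ.
    """
    # Flow-matching residual in pixel space.
    residual = pred_velocity - target_velocity
    # Move the residual to the frequency domain.
    residual_f = np.fft.fft2(residual)
    # Radial frequency magnitude of each FFT bin.
    fy = np.fft.fftfreq(residual.shape[0])[:, None]
    fx = np.fft.fftfreq(residual.shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    # Emphasize low (visually salient) frequencies, suppress high ones.
    weight = 1.0 / (1.0 + radius / alpha)
    # Weighted squared error averaged over all frequency bins.
    return float(np.mean(weight * np.abs(residual_f) ** 2))
```

Because the weight decays with radial frequency, an error concentrated in fine-grained detail contributes less to the loss than the same-magnitude error in coarse structure, which matches the stated goal of letting the DiT focus on low-frequency semantics.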