
DeCo: Frequency-Decoupled Pixel Diffusion for End-to-End Image Generation

November 24, 2025
Authors: Zehong Ma, Longhui Wei, Shuai Wang, Shiliang Zhang, Qi Tian
cs.AI

Abstract

Pixel diffusion aims to generate images directly in pixel space in an end-to-end fashion. This approach avoids the limitations of the VAE in two-stage latent diffusion and offers higher model capacity. Existing pixel diffusion models suffer from slow training and inference because they typically model both high-frequency signals and low-frequency semantics within a single diffusion transformer (DiT). To pursue a more efficient pixel diffusion paradigm, we propose DeCo, a frequency-DeCoupled pixel diffusion framework. Motivated by the intuition of decoupling the generation of high- and low-frequency components, we leverage a lightweight pixel decoder to generate high-frequency details conditioned on semantic guidance from the DiT, freeing the DiT to specialize in modeling low-frequency semantics. In addition, we introduce a frequency-aware flow-matching loss that emphasizes visually salient frequencies while suppressing insignificant ones. Extensive experiments show that DeCo achieves superior performance among pixel diffusion models, attaining FID scores of 1.62 (256×256) and 2.22 (512×512) on ImageNet and closing the gap with latent diffusion methods. Furthermore, our pretrained text-to-image model achieves a leading overall score of 0.86 on GenEval in a system-level comparison. Code is publicly available at https://github.com/Zehong-Ma/DeCo.
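
The frequency-aware flow-matching loss is the most concrete technical element described here (the architecture itself, a DiT producing semantic guidance for a lightweight pixel decoder, depends on details not given in the abstract). Below is a minimal sketch of such a loss, assuming a standard linear-interpolation flow-matching target and a simple radial spectral weight that emphasizes low frequencies; the paper's actual weighting scheme and hyperparameters are not specified in the abstract, so `alpha` and the weight shape are illustrative assumptions.

```python
import torch
import torch.fft


def frequency_aware_fm_loss(v_pred, x0, x1, alpha=1.0):
    """Sketch of a frequency-aware flow-matching loss.

    v_pred: predicted velocity field, shape (B, C, H, W)
    x0:     noise sample;  x1: clean image (linear interpolation path)

    For the linear path x_t = (1 - t) * x0 + t * x1, the flow-matching
    target velocity is u = x1 - x0. Instead of a plain pixel-space MSE,
    the residual is re-weighted in the Fourier domain so that low
    (visually salient) frequencies dominate the loss.
    """
    u = x1 - x0          # flow-matching target velocity
    resid = v_pred - u   # per-pixel residual

    # Move the residual to the frequency domain (real 2-D FFT).
    resid_f = torch.fft.rfft2(resid, norm="ortho")  # (B, C, H, W//2+1)

    # Radial weight that decays with frequency magnitude. This
    # particular form, 1 / (1 + |f|)^alpha, is an assumption standing
    # in for whatever weighting the paper uses.
    H, W = resid.shape[-2:]
    fy = torch.fft.fftfreq(H, device=resid.device)             # (H,)
    fx = torch.fft.rfftfreq(W, device=resid.device)            # (W//2+1,)
    radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # (H, W//2+1)
    weight = 1.0 / (1.0 + radius) ** alpha

    # Weighted mean squared error over the spectrum.
    return (weight * resid_f.abs() ** 2).mean()
```

In a training loop this would pair with sampling t uniformly in [0, 1], forming x_t = (1 - t) * x0 + t * x1, and predicting v_pred = model(x_t, t); the spectral re-weighting is the only piece that differs from plain flow matching.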