PixelGen: Pixel Diffusion Beats Latent Diffusion with Perceptual Loss
February 2, 2026
Authors: Zehong Ma, Ruihan Xu, Shiliang Zhang
cs.AI
Abstract
Pixel diffusion generates images directly in pixel space in an end-to-end manner, avoiding the artifacts and bottlenecks introduced by VAEs in two-stage latent diffusion. However, optimizing high-dimensional pixel manifolds, which contain many perceptually irrelevant signals, is challenging, leaving existing pixel diffusion methods lagging behind latent diffusion models. We propose PixelGen, a simple pixel diffusion framework with perceptual supervision. Instead of modeling the full image manifold, PixelGen introduces two complementary perceptual losses that guide the diffusion model toward learning a more meaningful perceptual manifold. An LPIPS loss facilitates learning better local patterns, while a DINO-based perceptual loss strengthens global semantics. With perceptual supervision, PixelGen surpasses strong latent diffusion baselines. It achieves an FID of 5.11 on ImageNet-256 without classifier-free guidance using only 80 training epochs, and demonstrates favorable scaling performance on large-scale text-to-image generation with a GenEval score of 0.79. PixelGen requires no VAEs, no latent representations, and no auxiliary training stages, providing a simpler yet more powerful generative paradigm. Code is publicly available at https://github.com/Zehong-Ma/PixelGen.
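To make the abstract's training objective concrete, the sketch below shows how a pixel-space diffusion loss can be combined with two complementary perceptual terms. This is a minimal illustration, not the paper's implementation: `lpips_features` and `dino_features` are hypothetical stand-ins (the actual method uses the LPIPS network and a DINO encoder), and the weights `lam_lpips` and `lam_dino` are assumed hyperparameters.

```python
import numpy as np

def lpips_features(img):
    """Stand-in for LPIPS backbone activations (local patterns).

    Stacks the image with its vertical gradient to mimic a shallow,
    locally sensitive feature map. The real LPIPS loss compares deep
    network activations instead.
    """
    return np.stack([img, np.gradient(img, axis=0)])

def dino_features(img):
    """Stand-in for DINO patch embeddings (global semantics).

    Averages non-overlapping 4x4 patches, mimicking a coarse,
    semantically oriented representation.
    """
    h, w = img.shape
    return img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))

def perceptual_diffusion_loss(pred, target, lam_lpips=0.5, lam_dino=0.5):
    """Pixel-space diffusion MSE plus local and global perceptual terms."""
    pixel_term = np.mean((pred - target) ** 2)
    local_term = np.mean((lpips_features(pred) - lpips_features(target)) ** 2)
    global_term = np.mean((dino_features(pred) - dino_features(target)) ** 2)
    return pixel_term + lam_lpips * local_term + lam_dino * global_term

# Toy check: the loss is zero for a perfect prediction and positive otherwise.
rng = np.random.default_rng(0)
target = rng.standard_normal((8, 8))
noisy_pred = target + 0.1 * rng.standard_normal((8, 8))
loss_same = perceptual_diffusion_loss(target, target)
loss_near = perceptual_diffusion_loss(noisy_pred, target)
print(loss_same, loss_near)
```

The key design point the abstract makes is that the two perceptual terms supervise different scales: the LPIPS-style term penalizes mismatches in local pattern statistics, while the DINO-style term penalizes mismatches in coarse, semantically pooled features, so the model need not spend capacity on perceptually irrelevant pixel detail.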