
Boosting Generative Image Modeling via Joint Image-Feature Synthesis

April 22, 2025
作者: Theodoros Kouzelis, Efstathios Karypidis, Ioannis Kakogeorgiou, Spyros Gidaris, Nikos Komodakis
cs.AI

Abstract

Latent diffusion models (LDMs) dominate high-quality image generation, yet integrating representation learning with generative modeling remains a challenge. We introduce a novel generative image modeling framework that seamlessly bridges this gap by leveraging a diffusion model to jointly model low-level image latents (from a variational autoencoder) and high-level semantic features (from a pretrained self-supervised encoder like DINO). Our latent-semantic diffusion approach learns to generate coherent image-feature pairs from pure noise, significantly enhancing both generative quality and training efficiency, all while requiring only minimal modifications to standard Diffusion Transformer architectures. By eliminating the need for complex distillation objectives, our unified design simplifies training and unlocks a powerful new inference strategy: Representation Guidance, which leverages learned semantics to steer and refine image generation. Evaluated in both conditional and unconditional settings, our method delivers substantial improvements in image quality and training convergence speed, establishing a new direction for representation-aware generative modeling.
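The abstract describes two ideas: a single diffusion objective over the concatenation of low-level VAE latents and high-level semantic features, and an inference-time "Representation Guidance" that steers generation using the learned semantics. The toy sketch below illustrates both in simplified form; the dimensions, the `toy_denoiser` placeholder, and the guidance formula (written by analogy to classifier-free guidance) are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes standing in for a VAE image latent and a DINO-style
# semantic feature vector (the real model operates on token sequences).
Z_DIM, F_DIM = 16, 8

def joint_forward_noise(z, f, alpha_bar):
    """Noise the concatenated image-feature pair with the standard
    diffusion forward process: x_t = sqrt(a) * x_0 + sqrt(1 - a) * eps."""
    x0 = np.concatenate([z, f])
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps
    return xt, eps

def toy_denoiser(xt, t):
    """Placeholder for the Diffusion Transformer: a real model would
    predict the noise eps from (xt, t); here we just return the input."""
    return xt

def representation_guidance(eps_joint, eps_image_only, w=1.5):
    """Hypothetical guidance rule, written by analogy to classifier-free
    guidance: push the prediction toward the semantically-informed one."""
    return eps_image_only + w * (eps_joint - eps_image_only)

# One training-style step on a random image-feature pair.
z = rng.standard_normal(Z_DIM)   # stand-in for a VAE image latent
f = rng.standard_normal(F_DIM)   # stand-in for a pretrained semantic feature
xt, eps = joint_forward_noise(z, f, alpha_bar=0.5)
loss = float(np.mean((toy_denoiser(xt, 0.5) - eps) ** 2))  # joint MSE objective

# One guided prediction at inference time.
eps_guided = representation_guidance(toy_denoiser(xt, 0.5), eps_image_only=eps)
```

The point of the sketch is that joint modeling requires only widening the denoiser's input/output to cover both modalities, which matches the abstract's claim of "minimal modifications to standard Diffusion Transformer architectures."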

