

Scalable GANs with Transformers

September 29, 2025
Authors: Sangeek Hyun, MinKyu Lee, Jae-Pil Heo
cs.AI

Abstract

Scalability has driven recent advances in generative modeling, yet its principles remain underexplored for adversarial learning. We investigate the scalability of Generative Adversarial Networks (GANs) through two design choices that have proven effective in other types of generative models: training in a compact Variational Autoencoder latent space and adopting purely transformer-based generators and discriminators. Training in latent space enables efficient computation while preserving perceptual fidelity, and this efficiency pairs naturally with plain transformers, whose performance scales with computational budget. Building on these choices, we analyze failure modes that emerge when naively scaling GANs. Specifically, we find issues such as underutilization of early layers in the generator and optimization instability as the network scales. Accordingly, we provide simple and scale-friendly solutions such as lightweight intermediate supervision and width-aware learning-rate adjustment. Our experiments show that GAT, a purely transformer-based, latent-space GAN, can be trained easily and reliably across a wide range of capacities (S through XL). Moreover, GAT-XL/2 achieves state-of-the-art single-step, class-conditional generation performance (FID of 2.96) on ImageNet-256 in just 40 epochs, 6x fewer epochs than strong baselines.
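To make the two scale-friendly fixes named in the abstract concrete, here is a minimal PyTorch sketch. The 1/width learning-rate rule, the toy latent-token generator, and the placement of the auxiliary head are illustrative assumptions (the names `width_aware_lr`, `ToyLatentGenerator`, and `mid_head` are hypothetical), not the paper's actual recipe.

```python
# Minimal sketch (not the authors' code) of width-aware learning-rate adjustment and
# lightweight intermediate supervision, under assumed conventions.

import torch
import torch.nn as nn


def width_aware_lr(base_lr: float, base_width: int, width: int) -> float:
    """Scale the learning rate down as the transformer widens (assumed 1/width rule)."""
    return base_lr * base_width / width


class ToyLatentGenerator(nn.Module):
    """Stand-in transformer generator over VAE latent tokens (hypothetical shapes)."""

    def __init__(self, width: int = 384, depth: int = 12, latent_channels: int = 4):
        super().__init__()
        self.inp = nn.Linear(latent_channels, width)
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=width, nhead=width // 64, batch_first=True)
            for _ in range(depth)
        ])
        # Lightweight intermediate head: maps an early block's features back to latent
        # channels so they can receive auxiliary supervision, countering the
        # underutilization of early generator layers described in the abstract.
        self.mid_head = nn.Linear(width, latent_channels)
        self.out = nn.Linear(width, latent_channels)

    def forward(self, z_tokens):
        h = self.inp(z_tokens)  # (batch, tokens, latent_channels) -> (batch, tokens, width)
        mid = None
        for i, blk in enumerate(self.blocks):
            h = blk(h)
            if i == len(self.blocks) // 4:  # tap an early layer for supervision
                mid = self.mid_head(h)
        return self.out(h), mid


gen = ToyLatentGenerator(width=768)  # a "wider" configuration
lr = width_aware_lr(base_lr=2e-4, base_width=384, width=768)  # halves the base LR
opt = torch.optim.Adam(gen.parameters(), lr=lr)
```

In this sketch the intermediate output `mid` would be scored alongside the final output (for example, by the discriminator or a reconstruction objective), while the learning rate shrinks as width grows relative to a base configuration; both are stand-ins for the paper's exact mechanisms.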