Scalable GANs with Transformers
September 29, 2025
Authors: Sangeek Hyun, MinKyu Lee, Jae-Pil Heo
cs.AI
Abstract
Scalability has driven recent advances in generative modeling, yet its
principles remain underexplored for adversarial learning. We investigate the
scalability of Generative Adversarial Networks (GANs) through two design
choices that have proven to be effective in other types of generative models:
training in a compact Variational Autoencoder latent space and adopting purely
transformer-based generators and discriminators. Training in latent space
enables efficient computation while preserving perceptual fidelity, and this
efficiency pairs naturally with plain transformers, whose performance scales
with computational budget. Building on these choices, we analyze failure modes
that emerge when naively scaling GANs. Specifically, we identify issues such as
underutilization of early layers in the generator and optimization instability
as the network scales. Accordingly, we provide simple, scale-friendly
solutions such as lightweight intermediate supervision and width-aware
learning-rate adjustment. Our experiments show that GAT, a purely
transformer-based, latent-space GAN, can be trained easily and reliably across a wide range of
capacities (S through XL). Moreover, GAT-XL/2 achieves state-of-the-art
single-step, class-conditional generation performance (FID of 2.96) on
ImageNet-256 in just 40 epochs, 6x fewer epochs than strong baselines.
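
The abstract names two scale-friendly fixes, lightweight intermediate supervision and width-aware learning-rate adjustment, without spelling out their mechanics. Below is a minimal PyTorch sketch of one way they could look; the module layout, the auxiliary-head placement, and the inverse-width learning-rate rule are illustrative assumptions, not the paper's actual implementation.

# Illustrative sketch only. The auxiliary-head location and the inverse-width
# learning-rate rule are assumptions made for exposition, not taken from the paper.
import torch
import torch.nn as nn

class TransformerGenerator(nn.Module):
    """Plain transformer generator producing VAE latent tokens."""
    def __init__(self, num_tokens=256, width=768, depth=12, heads=12, latent_dim=4):
        super().__init__()
        self.width = width
        self.tokens = nn.Parameter(torch.randn(1, num_tokens, width) * 0.02)
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(
                d_model=width, nhead=heads, dim_feedforward=4 * width,
                batch_first=True, norm_first=True)
            for _ in range(depth)])
        self.to_latent = nn.Linear(width, latent_dim)   # final output head
        self.mid_head = nn.Linear(width, latent_dim)    # lightweight auxiliary head

    def forward(self, cond):
        # cond: (B, width) class/noise embedding broadcast onto learned tokens.
        x = self.tokens + cond.unsqueeze(1)
        mid = None
        for i, block in enumerate(self.blocks):
            x = block(x)
            if i == len(self.blocks) // 2:
                # Tap an intermediate layer so an auxiliary loss can supervise it,
                # encouraging the early layers to carry useful signal.
                mid = self.mid_head(x)
        return self.to_latent(x), mid

def width_aware_lr(base_lr, base_width, width):
    # Assumed rule: shrink the learning rate as the hidden width grows so that
    # update magnitudes stay comparable across model sizes (muP-style heuristic).
    return base_lr * base_width / width

gen = TransformerGenerator(width=1152, depth=28)            # "XL"-like config, illustrative
lr = width_aware_lr(base_lr=2e-4, base_width=768, width=gen.width)
opt = torch.optim.Adam(gen.parameters(), lr=lr, betas=(0.0, 0.99))

cond = torch.randn(8, gen.width)
final_latents, mid_latents = gen(cond)                      # both heads feed the training loss
# adversarial loss on final_latents plus a small auxiliary term on mid_latents (not shown)

In this sketch the same target could supervise both the final output and the intermediate prediction, and the learning rate shrinks linearly as the width grows; both choices are placeholders for whatever rules the paper actually uses.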