

Visual Generation Without Guidance

January 26, 2025
Authors: Huayu Chen, Kai Jiang, Kaiwen Zheng, Jianfei Chen, Hang Su, Jun Zhu
cs.AI

Abstract

Classifier-Free Guidance (CFG) has been a default technique in various visual generative models, yet it requires inference from both conditional and unconditional models during sampling. We propose to build visual models that are free from guided sampling. The resulting algorithm, Guidance-Free Training (GFT), matches the performance of CFG while reducing sampling to a single model, halving the computational cost. Unlike previous distillation-based approaches that rely on pretrained CFG networks, GFT enables training directly from scratch. GFT is simple to implement. It retains the same maximum likelihood objective as CFG and differs mainly in the parameterization of conditional models. Implementing GFT requires only minimal modifications to existing codebases, as most design choices and hyperparameters are directly inherited from CFG. Our extensive experiments across five distinct visual models demonstrate the effectiveness and versatility of GFT. Across domains of diffusion, autoregressive, and masked-prediction modeling, GFT consistently achieves comparable or even lower FID scores, with similar diversity-fidelity trade-offs compared with CFG baselines, all while being guidance-free. Code will be available at https://github.com/thu-ml/GFT.
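The halved sampling cost can be seen in a minimal sketch of one denoising step. This is an illustration only, not the released GFT code: the function names, the toy model, and the scalar interface are assumptions made for clarity. CFG combines two forward passes of the network per step, while a guidance-free model needs just one.

```python
def cfg_predict(model, x, cond, w):
    """One CFG sampling step: requires TWO forward passes of the network."""
    eps_cond = model(x, cond)    # conditional prediction
    eps_uncond = model(x, None)  # unconditional prediction (null condition)
    # Extrapolate the conditional output away from the unconditional one by
    # guidance scale w (w = 0 recovers the unconditional model, w = 1 the
    # plain conditional model).
    return eps_uncond + w * (eps_cond - eps_uncond)

def gft_predict(model, x, cond):
    """One guidance-free sampling step: a SINGLE forward pass.

    GFT trains the conditional model so that this one call already yields a
    guided-quality prediction (interface here is hypothetical)."""
    return model(x, cond)

# Toy scalar "denoiser" that records its forward passes (illustrative only).
forward_passes = []
def toy_model(x, cond):
    forward_passes.append(cond)
    return x * (2.0 if cond is not None else 1.0)

guided = cfg_predict(toy_model, 1.0, cond="cat", w=1.5)  # 2 forward passes
single = gft_predict(toy_model, 1.0, cond="cat")         # 1 forward pass
```

Per the abstract, GFT keeps CFG's maximum likelihood training objective and changes only how the conditional model is parameterized, so at inference time the `gft_predict` path above is all that remains.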

