

GAS: Improving Discretization of Diffusion ODEs via Generalized Adversarial Solver

October 20, 2025
作者: Aleksandr Oganov, Ilya Bykov, Eva Neudachina, Mishan Aliev, Alexander Tolmachev, Alexander Sidorov, Aleksandr Zuev, Andrey Okhotin, Denis Rakitin, Aibek Alanov
cs.AI

Abstract

While diffusion models achieve state-of-the-art generation quality, they still suffer from computationally expensive sampling. Recent works address this issue with gradient-based optimization methods that distill a few-step ODE diffusion solver from the full sampling process, reducing the number of function evaluations from dozens to just a few. However, these approaches often rely on intricate training techniques and do not explicitly focus on preserving fine-grained details. In this paper, we introduce the Generalized Solver: a simple parameterization of the ODE sampler that does not require additional training tricks and improves quality over existing approaches. We further combine the original distillation loss with adversarial training, which mitigates artifacts and enhances detail fidelity. We call the resulting method the Generalized Adversarial Solver and demonstrate its superior performance compared to existing solver training methods under similar resource constraints. Code is available at https://github.com/3145tttt/GAS.
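The objective the abstract describes combines a distillation term (matching the few-step solver's output to the full sampling process) with an adversarial term. A minimal sketch of such a combined loss, assuming an MSE distillation term and a non-saturating GAN term; the names `combined_loss` and `lambda_adv` are illustrative, not from the paper:

```python
import numpy as np

def combined_loss(student_out, teacher_out, disc_logits, lambda_adv=0.1):
    """Distillation term (match the full-solver output) plus an
    adversarial term (fool a discriminator), in the spirit of the
    distillation + adversarial objective the abstract describes."""
    # MSE between the few-step solver's samples and the teacher's samples
    distill = np.mean((student_out - teacher_out) ** 2)
    # Non-saturating generator loss: softplus(-D(x)), small when the
    # discriminator scores the sample as real (large positive logits)
    adversarial = np.mean(np.log1p(np.exp(-disc_logits)))
    return distill + lambda_adv * adversarial

# Toy usage with random tensors standing in for sampler outputs
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 8))
student = teacher + 0.01 * rng.normal(size=(4, 8))
logits = rng.normal(size=(4,))
loss = combined_loss(student, teacher, logits)
```

The weighting `lambda_adv` would trade off sample fidelity to the teacher against the detail-sharpening pressure of the discriminator; the actual discriminator architecture and loss form used by GAS are specified in the paper and repository, not here.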