

Analyzing and Improving the Training Dynamics of Diffusion Models

December 5, 2023
Authors: Tero Karras, Miika Aittala, Jaakko Lehtinen, Janne Hellsten, Timo Aila, Samuli Laine
cs.AI

Abstract

Diffusion models currently dominate the field of data-driven image synthesis with their unparalleled scaling to large datasets. In this paper, we identify and rectify several causes for uneven and ineffective training in the popular ADM diffusion model architecture, without altering its high-level structure. Observing uncontrolled magnitude changes and imbalances in both the network activations and weights over the course of training, we redesign the network layers to preserve activation, weight, and update magnitudes on expectation. We find that systematic application of this philosophy eliminates the observed drifts and imbalances, resulting in considerably better networks at equal computational complexity. Our modifications improve the previous record FID of 2.41 in ImageNet-512 synthesis to 1.81, achieved using fast deterministic sampling. As an independent contribution, we present a method for setting the exponential moving average (EMA) parameters post-hoc, i.e., after completing the training run. This allows precise tuning of EMA length without the cost of performing several training runs, and reveals its surprising interactions with network architecture, training time, and guidance.
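The magnitude-preserving redesign can be made concrete with a small sketch. The `MPLinear` class below is illustrative, not the authors' code: the stored weight is forced back to a fixed magnitude on every training step, so optimizer updates can only rotate each weight vector rather than grow it, and a fan-in scale in the forward pass keeps unit-variance inputs mapping to unit-variance outputs in expectation.

```python
import torch
import torch.nn as nn

class MPLinear(nn.Module):
    """Sketch of a magnitude-preserving linear layer (name is illustrative).

    The stored weight is re-normalized to unit per-element variance on
    every training step, so gradient updates cannot change its magnitude,
    only its direction. Dividing by sqrt(fan_in) when the weight is applied
    then preserves activation magnitudes in expectation.
    """
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))

    @staticmethod
    def _normalized(w: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
        # Rescale each output row to norm sqrt(fan_in),
        # i.e. unit variance per weight element.
        norm = torch.linalg.vector_norm(w, dim=1, keepdim=True)
        return w * (w.shape[1] ** 0.5) / (norm + eps)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            with torch.no_grad():
                # Forced weight normalization: undo any magnitude drift
                # accumulated from the previous optimizer step.
                self.weight.copy_(self._normalized(self.weight))
        w = self._normalized(self.weight)        # rows of norm sqrt(fan_in)
        return x @ w.t() / (w.shape[1] ** 0.5)   # effective unit-norm rows
```

Because every weight vector then stays at a fixed magnitude, the relative size of each update is uniform across layers and roughly constant over training, which is the kind of drift and imbalance the abstract says the redesign eliminates.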
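The post-hoc EMA contribution can likewise be sketched. The idea is to store averaged network snapshots during training and, afterwards, approximate an EMA of any desired length as the least-squares-optimal linear combination of those snapshots; with the power-function averaging profiles the paper uses, the required inner products have a closed form. The NumPy sketch below (function names are ours, and the details are a reconstruction of the idea, not the authors' implementation) computes the mixing weights:

```python
import numpy as np

def profile_dot(ga: float, ta: float, gb: float, tb: float) -> float:
    """Closed-form L2 inner product of two power-function averaging
    profiles p(t) = (g + 1) * t**g / t_max**(g + 1) on [0, t_max]."""
    t = min(ta, tb)
    return ((ga + 1) * (gb + 1) * t ** (ga + gb + 1)
            / ((ga + gb + 1) * ta ** (ga + 1) * tb ** (gb + 1)))

def posthoc_ema_weights(gammas, times, gamma_target, t_target):
    """Mixing weights x such that sum_i x[i] * snapshot_i best
    approximates (in least squares) the target averaging profile."""
    n = len(gammas)
    A = np.array([[profile_dot(gammas[i], times[i], gammas[j], times[j])
                   for j in range(n)] for i in range(n)])
    b = np.array([profile_dot(gammas[i], times[i], gamma_target, t_target)
                  for i in range(n)])
    return np.linalg.solve(A, b)

# Example: snapshots of two tracked averages (gamma = 5 and 10) saved at
# several points of training, recombined for a new gamma at the final time.
times = [0.25, 0.5, 0.75, 1.0] * 2
gammas = [5.0] * 4 + [10.0] * 4
x = posthoc_ema_weights(gammas, times, gamma_target=7.0, t_target=1.0)
# final_params = sum(xi * snap for xi, snap in zip(x, snapshots))
```

Since each candidate EMA length only requires solving one small linear system over the stored snapshots, sweeping the EMA length after training is essentially free compared with rerunning training, which is what enables the precise tuning the abstract describes.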