SANA-Sprint: One-Step Diffusion with Continuous-Time Consistency Distillation
March 12, 2025
Authors: Junsong Chen, Shuchen Xue, Yuyang Zhao, Jincheng Yu, Sayak Paul, Junyu Chen, Han Cai, Enze Xie, Song Han
cs.AI
Abstract
This paper presents SANA-Sprint, an efficient diffusion model for ultra-fast
text-to-image (T2I) generation. SANA-Sprint is built on a pre-trained
foundation model and augmented with hybrid distillation, dramatically reducing
inference steps from 20 to 1-4. We introduce three key innovations: (1) We
propose a training-free approach that transforms a pre-trained flow-matching
model for continuous-time consistency distillation (sCM), eliminating costly
training from scratch and achieving high training efficiency. Our hybrid
distillation strategy combines sCM with latent adversarial distillation (LADD):
sCM ensures alignment with the teacher model, while LADD enhances single-step
generation fidelity. (2) SANA-Sprint is a unified step-adaptive model that
achieves high-quality generation in 1-4 steps, eliminating step-specific
training and improving efficiency. (3) We integrate ControlNet with SANA-Sprint
for real-time interactive image generation, enabling instant visual feedback
for user interaction. SANA-Sprint establishes a new Pareto frontier in
speed-quality tradeoffs, achieving state-of-the-art performance with 7.59 FID
and 0.74 GenEval in only 1 step - outperforming FLUX-schnell (7.94 FID / 0.71
GenEval) while being 10x faster (0.1s vs 1.1s on H100). It also achieves 0.1s
(T2I) and 0.25s (ControlNet) latency for 1024 x 1024 images on H100, and 0.31s
(T2I) on an RTX 4090, showcasing its exceptional efficiency and potential for
AI-powered consumer applications (AIPC). Code and pre-trained models will be
open-sourced.
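To make the hybrid distillation objective concrete, the toy sketch below combines a continuous-time consistency (sCM) term, here approximated with a small finite-difference step along the teacher's flow-matching ODE, with a LADD-style adversarial term on one-step samples. All module names, shapes, the Euler surrogate, and the loss weight `lam_adv` are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a hybrid sCM + LADD distillation loss (toy scale).
import torch
import torch.nn as nn

LATENT_DIM = 8

class TinyNet(nn.Module):
    """Stand-in for the (much larger) student/teacher/discriminator nets."""
    def __init__(self, out_dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM + 1, 32), nn.SiLU(),
                                 nn.Linear(32, out_dim))
    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

def hybrid_distill_loss(student, teacher, disc, x0, lam_adv=0.5, dt=1e-2):
    """One hybrid loss evaluation on a batch of clean latents x0."""
    b = x0.shape[0]
    t = torch.rand(b, 1)                      # random times in (0, 1)
    noise = torch.randn_like(x0)
    xt = (1 - t) * x0 + t * noise             # flow-matching interpolation

    # sCM term (finite-difference surrogate for the continuous-time tangent):
    # the student's output should stay consistent along the teacher's ODE.
    with torch.no_grad():
        v = teacher(xt, t)                    # teacher velocity field
        xt_next = xt + dt * v                 # small Euler step along the ODE
        target = student(xt_next, t + dt)     # stop-gradient consistency target
    l_scm = ((student(xt, t) - target) ** 2).mean()

    # LADD-style adversarial term: push the discriminator score of the
    # student's one-step samples up (simplified non-saturating form).
    fake = student(noise, torch.ones(b, 1))   # one-step generation from noise
    l_adv = -disc(fake, torch.zeros(b, 1)).mean()

    return l_scm + lam_adv * l_adv

student, teacher, disc = TinyNet(), TinyNet(), TinyNet(out_dim=1)
loss = hybrid_distill_loss(student, teacher, disc, torch.randn(4, LATENT_DIM))
loss.backward()  # gradients reach the student (and, in this toy, the disc)
```

In the paper's setting the sCM tangent is computed in continuous time rather than by an Euler surrogate, and the adversarial critic operates in latent space on a pretrained backbone; this sketch only shows how the two loss terms compose into a single training objective.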