TPDiff: Temporal Pyramid Video Diffusion Model
March 12, 2025
Authors: Lingmin Ran, Mike Zheng Shou
cs.AI
Abstract
The development of video diffusion models unveils a significant challenge:
the substantial computational demands. To mitigate this challenge, we note that
the reverse process of diffusion exhibits an inherent entropy-reducing nature.
Given the inter-frame redundancy in video modality, maintaining full frame
rates in high-entropy stages is unnecessary. Based on this insight, we propose
TPDiff, a unified framework to enhance training and inference efficiency. By
dividing diffusion into several stages, our framework progressively increases
frame rate along the diffusion process with only the last stage operating on
full frame rate, thereby optimizing computational efficiency. To train the
multi-stage diffusion model, we introduce a dedicated training framework:
stage-wise diffusion. By solving the partitioned probability flow ordinary
differential equations (ODE) of diffusion under aligned data and noise, our
training strategy is applicable to various diffusion forms and further enhances
training efficiency. Comprehensive experimental evaluations validate the
generality of our method, demonstrating 50% reduction in training cost and 1.5x
improvement in inference efficiency.
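The core idea of progressively increasing the frame rate along the reverse process can be sketched as a simple schedule. The following is an illustrative sketch only, not the paper's implementation: the function name, the number of stages, and the halving rule per stage are all hypothetical assumptions made here for clarity.

```python
def pyramid_frame_count(t: float, full_frames: int = 16, num_stages: int = 3) -> int:
    """Map a diffusion time t in [0, 1] (1 = pure noise, 0 = clean data)
    to the number of frames processed at that step.

    Early, high-entropy stages use fewer frames; only the final
    stage operates at the full frame rate. This halving rule is a
    hypothetical example, not the schedule used in TPDiff.
    """
    # Stage index: 0 is the noisiest stage, num_stages - 1 the cleanest.
    stage = min(int((1.0 - t) * num_stages), num_stages - 1)
    # Halve the frame count for each stage away from the final one.
    return max(1, full_frames >> (num_stages - 1 - stage))

# Frame rate increases as denoising progresses (early, middle, late steps):
schedule = [pyramid_frame_count(t) for t in (0.9, 0.5, 0.1)]
print(schedule)  # → [4, 8, 16]
```

Because earlier steps operate on far fewer frames, each denoising pass in the high-entropy stages costs a fraction of a full-frame-rate pass, which is where the reported training and inference savings come from.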