
Self Forcing: Bridging the Train-Test Gap in Autoregressive Video Diffusion

June 9, 2025
作者: Xun Huang, Zhengqi Li, Guande He, Mingyuan Zhou, Eli Shechtman
cs.AI

Abstract

We introduce Self Forcing, a novel training paradigm for autoregressive video diffusion models. It addresses the longstanding issue of exposure bias, where models trained on ground-truth context must generate sequences conditioned on their own imperfect outputs during inference. Unlike prior methods that denoise future frames based on ground-truth context frames, Self Forcing conditions each frame's generation on previously self-generated outputs by performing autoregressive rollout with key-value (KV) caching during training. This strategy enables supervision through a holistic loss at the video level that directly evaluates the quality of the entire generated sequence, rather than relying solely on traditional frame-wise objectives. To ensure training efficiency, we employ a few-step diffusion model along with a stochastic gradient truncation strategy, effectively balancing computational cost and performance. We further introduce a rolling KV cache mechanism that enables efficient autoregressive video extrapolation. Extensive experiments demonstrate that our approach achieves real-time streaming video generation with sub-second latency on a single GPU, while matching or even surpassing the generation quality of significantly slower and non-causal diffusion models. Project website: http://self-forcing.github.io/
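The training procedure sketched in the abstract can be made concrete. Below is a minimal, PyTorch-style illustration of one Self Forcing training step, assuming a hypothetical few-step causal video diffusion model that exposes `init_kv_cache`, `denoise`, and `update_kv_cache`, and a hypothetical holistic `video_loss`. All of these names are assumptions for illustration; this is a sketch of the idea, not the paper's implementation.

```python
import random
import torch

def self_forcing_step(model, prompt, num_frames, total_steps, video_loss):
    """One Self Forcing training step, per the abstract: roll the model out
    autoregressively with a KV cache so each frame is conditioned on previously
    self-generated frames, then score the whole sequence with a holistic
    video-level loss. Gradients are kept for only one denoising step per frame
    (a sketch of the stochastic gradient truncation strategy)."""
    kv_cache = model.init_kv_cache()
    frames = []
    for t in range(num_frames):
        # Randomly choose how many of the few denoising steps to run for this
        # frame; only the last executed step carries gradients.
        k = random.randint(1, total_steps)
        x = torch.randn(model.frame_shape)  # each frame starts from noise
        with torch.no_grad():
            for s in range(k - 1):
                x = model.denoise(x, step=s, prompt=prompt, kv_cache=kv_cache)
        # Final step runs with gradients; a few-step model emits a clean frame.
        frame = model.denoise(x, step=k - 1, prompt=prompt, kv_cache=kv_cache)
        # Cache this frame's keys/values so later frames attend to the model's
        # own outputs rather than ground-truth context (the core idea). The
        # detach is a simplifying assumption of this sketch.
        model.update_kv_cache(kv_cache, frame.detach(), frame_index=t)
        frames.append(frame)
    video = torch.stack(frames, dim=0)  # time-major stack of generated frames
    return video_loss(video)  # holistic objective on the entire video
```

Because later frames see only self-generated context, the training-time conditioning matches inference-time conditioning, which is precisely the exposure-bias gap the method closes.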
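For extrapolating beyond the training horizon, the abstract mentions a rolling KV cache. A minimal sketch of one such eviction scheme follows, assuming the cache is kept as a per-frame list of key/value tensors; the actual cache layout in the paper is model-specific, so this is an assumed simplification.

```python
from collections import deque
import torch

class RollingKVCache:
    """A rolling key/value cache: retains attention KV entries for only the
    most recent `max_frames` frames, evicting the oldest, so per-frame
    attention cost stays bounded during arbitrarily long autoregressive
    extrapolation. Illustrative layout: one (keys, values) pair per frame."""

    def __init__(self, max_frames: int):
        # A bounded deque silently evicts the oldest entry on overflow.
        self.entries = deque(maxlen=max_frames)

    def append(self, keys: torch.Tensor, values: torch.Tensor) -> None:
        self.entries.append((keys, values))

    def context(self) -> tuple[torch.Tensor, torch.Tensor]:
        """Concatenate cached keys/values along the sequence axis so the
        current frame can attend to the retained context window."""
        if not self.entries:
            raise ValueError("cache is empty; append a frame's KV first")
        ks, vs = zip(*self.entries)
        return torch.cat(ks, dim=0), torch.cat(vs, dim=0)
```

Keeping the window fixed trades unbounded context for constant memory and latency per frame, which is what enables the streaming, sub-second-latency generation the abstract reports.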