
Self Forcing: Bridging the Train-Test Gap in Autoregressive Video Diffusion

June 9, 2025
Authors: Xun Huang, Zhengqi Li, Guande He, Mingyuan Zhou, Eli Shechtman
cs.AI

Abstract

We introduce Self Forcing, a novel training paradigm for autoregressive video diffusion models. It addresses the longstanding issue of exposure bias, where models trained on ground-truth context must generate sequences conditioned on their own imperfect outputs during inference. Unlike prior methods that denoise future frames based on ground-truth context frames, Self Forcing conditions each frame's generation on previously self-generated outputs by performing autoregressive rollout with key-value (KV) caching during training. This strategy enables supervision through a holistic loss at the video level that directly evaluates the quality of the entire generated sequence, rather than relying solely on traditional frame-wise objectives. To ensure training efficiency, we employ a few-step diffusion model along with a stochastic gradient truncation strategy, effectively balancing computational cost and performance. We further introduce a rolling KV cache mechanism that enables efficient autoregressive video extrapolation. Extensive experiments demonstrate that our approach achieves real-time streaming video generation with sub-second latency on a single GPU, while matching or even surpassing the generation quality of significantly slower and non-causal diffusion models. Project website: http://self-forcing.github.io/
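To make the training-time rollout concrete, here is a minimal PyTorch sketch of the idea rather than the authors' implementation: a toy causal generator stands in for the few-step diffusion backbone (and its KV cache), each frame is generated conditioned on earlier self-generated frames instead of ground-truth context, a placeholder holistic loss scores the whole clip, and backpropagation is stochastically truncated at a random frame. All names here (FrameGenerator, video_loss, self_forcing_step) are hypothetical.

```python
import torch
import torch.nn as nn

class FrameGenerator(nn.Module):
    """Toy causal generator: predicts the next frame from all prior frames.
    Stands in for a few-step diffusion backbone with KV caching."""
    def __init__(self, dim=16):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, context):
        # context: (B, T, D) — previously generated frames
        return torch.tanh(self.proj(context.mean(dim=1)))  # next frame, (B, D)

def video_loss(frames):
    """Stand-in for a holistic, video-level loss evaluated on the entire
    generated clip rather than a traditional frame-wise objective."""
    return torch.stack(frames, dim=1).pow(2).mean()

def self_forcing_step(model, first_frame, num_frames=8):
    frames = [first_frame]
    # Stochastic gradient truncation: sample a cut point and detach everything
    # generated before it, so backprop only flows through the later frames.
    cut = int(torch.randint(0, num_frames, (1,)))
    for t in range(1, num_frames):
        if t == cut:
            frames = [f.detach() for f in frames]
        context = torch.stack(frames, dim=1)  # self-generated context, not ground truth
        frames.append(model(context))
    return video_loss(frames)

model = FrameGenerator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = self_forcing_step(model, torch.randn(2, 16))
loss.backward()
opt.step()
```

The key departure from teacher forcing is visible in the loop: the conditioning context is rebuilt from the model's own outputs at every step, so the distribution the model sees during training matches what it will see at inference.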
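The rolling KV cache mentioned in the abstract can be sketched as a fixed-size window over per-frame key/value tensors: the oldest frame's entries are evicted as new frames arrive, so the attention context and memory footprint stay bounded during open-ended extrapolation. This is an illustrative stand-in, assuming per-frame keys and values of shape (batch, dim); the paper's actual cache layout may differ.

```python
from collections import deque
import torch

class RollingKVCache:
    """Keeps key/value tensors for at most `max_frames` recent frames."""
    def __init__(self, max_frames):
        self.keys = deque(maxlen=max_frames)    # oldest entries evicted automatically
        self.values = deque(maxlen=max_frames)

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def as_tensors(self):
        # Concatenate cached entries along the time axis for attention.
        return (torch.stack(list(self.keys), dim=1),
                torch.stack(list(self.values), dim=1))

cache = RollingKVCache(max_frames=4)
for t in range(10):                              # generate beyond the cache window
    k, v = torch.randn(2, 8), torch.randn(2, 8)  # per-frame keys/values from the model
    cache.append(k, v)
K, V = cache.as_tensors()                        # only the 4 most recent frames remain
assert K.shape == (2, 4, 8)
```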