Stream-R1: Reliability-Perplexity Aware Reward Distillation for Streaming Video Generation
May 5, 2026
Authors: Bin Wu, Mengqi Huang, Shaojin Wu, Weinan Jia, Yuxin Wang, Zhendong Mao, Yongdong Zhang
cs.AI
Abstract
Distillation-based acceleration has become foundational for making autoregressive streaming video diffusion models practical, with distribution matching distillation (DMD) as the de facto choice. Existing methods, however, train the student to match the teacher's output indiscriminately, treating every rollout, frame, and pixel as equally reliable supervision. We argue that this caps distilled quality, since it overlooks two complementary axes of variance in DMD supervision: Inter-Reliability across student rollouts, whose supervision varies in reliability, and Intra-Perplexity across spatial regions and temporal frames, which contribute unequally to the remaining quality gains. The objective thus conflates two questions under a uniform weight: whether to learn from each rollout, and where to concentrate optimization within it. To address this, we propose Stream-R1, a Reliability-Perplexity Aware Reward Distillation framework that adaptively reweights the distillation objective at both rollout and spatiotemporal-element levels through a single shared reward-guided mechanism. At the Inter-Reliability level, Stream-R1 rescales each rollout's loss by an exponential of a pretrained video reward score, so that rollouts with reliable supervision dominate optimization. At the Intra-Perplexity level, it back-propagates the same reward model to extract per-pixel gradient saliency, which is factored into spatial and temporal weights that concentrate optimization pressure on regions and frames where refinement yields the largest expected gain. An adaptive balancing mechanism prevents any single quality axis (visual quality, motion quality, or text alignment) from dominating optimization. Stream-R1 attains consistent improvements on all three dimensions over distillation baselines on standard streaming video generation benchmarks, without architectural modification or additional inference cost.
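The two-level reweighting described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's exact formulation: the function name `reweight_dmd_loss`, the softmax-style normalization of the exponential reward weights, and the per-frame/per-pixel factorization of saliency are all illustrative choices; the paper's actual loss, reward model, and normalization details are not specified in the abstract.

```python
import numpy as np

def reweight_dmd_loss(per_pixel_loss, rollout_rewards, saliency, beta=1.0, eps=1e-8):
    """Illustrative two-level reweighting of a per-pixel DMD loss.

    per_pixel_loss : (R, T, H, W) distillation loss per rollout, frame, pixel
    rollout_rewards: (R,) scores from a pretrained video reward model
    saliency       : (R, T, H, W) abs. reward gradient w.r.t. pixels
    """
    # Inter-Reliability: exponential reward weights over rollouts,
    # normalized so rollouts with reliable supervision dominate the update.
    w_rollout = np.exp(beta * rollout_rewards)
    w_rollout = w_rollout / w_rollout.sum()

    # Intra-Perplexity: factor saliency into a temporal weight per frame
    # and a spatial weight per pixel (each normalized to sum to 1).
    frame_sal = saliency.sum(axis=(2, 3), keepdims=True)            # (R, T, 1, 1)
    w_temporal = frame_sal / (frame_sal.sum(axis=1, keepdims=True) + eps)
    w_spatial = saliency / (frame_sal + eps)                        # sums to 1 per frame

    weighted = per_pixel_loss * w_temporal * w_spatial              # (R, T, H, W)
    per_rollout = weighted.sum(axis=(1, 2, 3))                      # (R,)
    return float((w_rollout * per_rollout).sum())
```

With uniform saliency and unit loss, every rollout contributes a per-rollout loss of 1.0 and the reward weights simply form a convex combination, so the total stays at 1.0; raising the loss of a high-reward rollout pulls the total toward that rollout, which is exactly the reliability-weighting behavior the abstract describes.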