
Stream-R1: Reliability-Perplexity Aware Reward Distillation for Streaming Video Generation

May 5, 2026
作者: Bin Wu, Mengqi Huang, Shaojin Wu, Weinan Jia, Yuxin Wang, Zhendong Mao, Yongdong Zhang
cs.AI

Abstract

Distillation-based acceleration has become foundational for making autoregressive streaming video diffusion models practical, with distribution matching distillation (DMD) as the de facto choice. Existing methods, however, train the student to match the teacher's output indiscriminately, treating every rollout, frame, and pixel as equally reliable supervision. We argue that this caps distilled quality, since it overlooks two complementary axes of variance in DMD supervision: Inter-Reliability across student rollouts, whose supervision varies in reliability, and Intra-Perplexity across spatial regions and temporal frames, which contribute unequally to where quality can still be improved. The objective thus conflates two questions under a uniform weight: whether to learn from each rollout, and where to concentrate optimization within it. To address this, we propose Stream-R1, a Reliability-Perplexity Aware Reward Distillation framework that adaptively reweights the distillation objective at both the rollout and spatiotemporal-element levels through a single shared reward-guided mechanism. At the Inter-Reliability level, Stream-R1 rescales each rollout's loss by an exponential of a pretrained video reward score, so that rollouts with reliable supervision dominate optimization. At the Intra-Perplexity level, it back-propagates through the same reward model to extract per-pixel gradient saliency, which is factored into spatial and temporal weights that concentrate optimization pressure on the regions and frames where refinement yields the largest expected gain. An adaptive balancing mechanism prevents any single quality axis (visual quality, motion quality, or text alignment) from dominating optimization. Stream-R1 attains consistent improvements on all three dimensions over distillation baselines on standard streaming video generation benchmarks, without architectural modification or additional inference cost.
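The two weighting levels described in the abstract can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the function name, the `reward_model` interface, the temperature `beta`, and the mean-one normalization of the saliency weights are all assumptions made for the example.

```python
import torch

def reliability_perplexity_weighted_loss(dmd_loss_map, rollout, reward_model,
                                         beta=1.0, eps=1e-6):
    """Reweight a per-element DMD loss by a reward score and its saliency.

    dmd_loss_map: per-element distillation loss, shape (T, C, H, W)
    rollout:      generated video clip, shape (T, C, H, W)
    reward_model: differentiable video reward, maps rollout -> scalar
    """
    rollout = rollout.detach().requires_grad_(True)
    reward = reward_model(rollout)  # scalar reward for this rollout

    # Inter-Reliability: rescale the whole rollout's loss by an
    # exponential of the reward score, so reliable rollouts dominate.
    w_rollout = torch.exp(beta * reward.detach())

    # Intra-Perplexity: back-propagate through the reward model to get
    # per-pixel gradient saliency, aggregated over channels.
    grad, = torch.autograd.grad(reward, rollout)
    saliency = grad.abs().sum(dim=1)  # (T, H, W)

    # Factor saliency into a temporal weight per frame and a spatial
    # weight per pixel, each normalized to mean one.
    w_t = saliency.sum(dim=(1, 2))                     # (T,)
    w_t = w_t / (w_t.sum() + eps) * w_t.numel()
    w_s = saliency / (saliency.sum(dim=(1, 2), keepdim=True) + eps)
    w_s = w_s * saliency[0].numel()

    weights = w_rollout * w_t[:, None, None] * w_s     # (T, H, W)
    return (dmd_loss_map.mean(dim=1) * weights).mean()
```

With a uniform loss map and a mean-pooling toy reward, the normalization leaves the loss unchanged; in practice the saliency weights concentrate the loss on high-gradient regions and frames, and the reward exponential scales whole rollouts up or down.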