
Block Cascading: Training Free Acceleration of Block-Causal Video Models

November 25, 2025
作者: Hmrishav Bandyopadhyay, Nikhil Pinnaparaju, Rahim Entezari, Jim Scott, Yi-Zhe Song, Varun Jampani
cs.AI

Abstract

Block-causal video generation faces a stark speed-quality trade-off: small 1.3B models manage only 16 FPS, while large 14B models crawl at 4.5 FPS, forcing users to choose between responsiveness and quality. Block Cascading significantly mitigates this trade-off through training-free parallelization. Our key insight is that future video blocks do not need fully denoised current blocks to begin generation. By starting block generation with partially denoised context from predecessor blocks, we transform the sequential pipeline into a parallel cascade in which multiple blocks denoise simultaneously. With 5 GPUs exploiting this temporal parallelism, we achieve ~2x acceleration across all model scales: 1.3B models accelerate from 16 to 30 FPS, and 14B models from 4.5 to 12.5 FPS. Beyond inference speed, Block Cascading eliminates the ~200 ms KV-recaching overhead incurred at context switches during interactive generation. Extensive evaluations across multiple block-causal pipelines demonstrate no significant loss in generation quality when switching from block-causal to Block Cascading inference. Project Page: https://hmrishavbandy.github.io/block_cascading_page/
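
To make the scheduling concrete, below is a minimal sketch (not the authors' implementation) of the staggered schedule the abstract describes, simulated on a single device: block k begins denoising once block k-1 is a fixed number of steps ahead, rather than fully denoised, so several blocks are in flight per tick. The toy denoise_step, the LAG value, the step count T, and the latent shapes are all illustrative assumptions; in the paper's setting, each in-flight block would occupy its own GPU.

```python
import torch

T = 8            # denoising steps per block (illustrative choice)
LAG = 2          # steps a block must run ahead of its successor (assumed)
NUM_BLOCKS = 5   # number of video blocks in the cascade

def denoise_step(latent, context):
    # Stand-in for one diffusion denoising step conditioned on the
    # (possibly only partially denoised) preceding block.
    return 0.9 * latent + 0.1 * context

blocks = [torch.randn(4, 16) for _ in range(NUM_BLOCKS)]  # noisy latents
done = [0] * NUM_BLOCKS  # denoising steps completed per block

tick = 0
while any(s < T for s in done):
    snapshot = list(done)  # one simulated parallel "tick" across devices
    for k in range(NUM_BLOCKS):
        # Block-causal inference would require snapshot[k-1] == T here;
        # the cascade only needs the predecessor LAG steps ahead (or done).
        ready = k == 0 or snapshot[k - 1] >= min(T, snapshot[k] + LAG)
        if snapshot[k] < T and ready:
            ctx = blocks[k - 1] if k > 0 else torch.zeros_like(blocks[k])
            blocks[k] = denoise_step(blocks[k], ctx)
            done[k] += 1
    tick += 1

print(f"cascade: {tick} ticks vs {T * NUM_BLOCKS} sequential steps")
```

With these toy numbers the cascade finishes in T + (NUM_BLOCKS - 1) * LAG = 16 ticks instead of 40 sequential steps, which is the source of the roughly 2x speedups reported in the abstract when the in-flight blocks run on separate GPUs.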