DC-VideoGen: Efficient Video Generation with Deep Compression Video Autoencoder
September 29, 2025
Authors: Junyu Chen, Wenkun He, Yuchao Gu, Yuyang Zhao, Jincheng Yu, Junsong Chen, Dongyun Zou, Yujun Lin, Zhekai Zhang, Muyang Li, Haocheng Xi, Ligeng Zhu, Enze Xie, Song Han, Han Cai
cs.AI
Abstract
We introduce DC-VideoGen, a post-training acceleration framework for efficient video generation. DC-VideoGen can be applied to any pre-trained video diffusion model, improving efficiency by adapting it to a deep compression latent space with lightweight fine-tuning. The framework builds on two key innovations: (i) a Deep Compression Video Autoencoder with a novel chunk-causal temporal design that achieves 32x/64x spatial and 4x temporal compression while preserving reconstruction quality and generalization to longer videos; and (ii) AE-Adapt-V, a robust adaptation strategy that enables rapid and stable transfer of pre-trained models into the new latent space. Adapting the pre-trained Wan-2.1-14B model with DC-VideoGen requires only 10 GPU days on the NVIDIA H100 GPU. The accelerated models achieve up to 14.8x lower inference latency than their base counterparts without compromising quality, and further enable 2160x3840 video generation on a single GPU. Code: https://github.com/dc-ai-projects/DC-VideoGen.
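
The abstract itself contains no code; the following minimal Python sketch only illustrates the two ideas it names, under assumptions not confirmed by the paper. It assumes "chunk-causal" means full attention among latent frames within a chunk and past-only attention across chunks, and uses floor division to stand in for whatever padding the real autoencoder applies. The names latent_shape and chunk_causal_mask are hypothetical, not taken from the paper's repository.

import numpy as np

def latent_shape(frames, height, width, spatial=32, temporal=4):
    # Latent grid size under the stated 32x (or 64x) spatial and 4x temporal
    # compression; floor division is a stand-in for the model's real padding.
    return (frames // temporal, height // spatial, width // spatial)

def chunk_causal_mask(num_latent_frames, chunk_size):
    # Boolean mask over latent frames: entry (i, j) is True where frame i may
    # attend to frame j -- bidirectional inside a chunk, causal across chunks.
    # This interpretation of "chunk-causal" is an assumption, not the paper's code.
    chunk_of = np.arange(num_latent_frames) // chunk_size
    return chunk_of[None, :] <= chunk_of[:, None]

# A 768x1280, 16-frame clip maps to a 4 x 24 x 40 latent grid at 32x/4x.
print(latent_shape(frames=16, height=768, width=1280))
print(chunk_causal_mask(num_latent_frames=4, chunk_size=2).astype(int))

If this reading is right, the design choice is a middle ground between fully bidirectional and fully causal attention: frames within a chunk see each other both ways, which helps local reconstruction, while the causal link across chunks lets the encoder extend to videos longer than those seen in training, consistent with the abstract's claim of preserved reconstruction quality and generalization to longer videos.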