Uni-ViGU: Towards Unified Video Generation and Understanding via A Diffusion-Based Video Generator

April 9, 2026
Authors: Luozheng Qin, Jia Gong, Qian Qiao, Tianjiao Li, Li Xu, Haoyu Pan, Chao Qu, Zhiyu Tan, Hao Li
cs.AI

Abstract

Unified multimodal models integrating visual understanding and generation face a fundamental challenge: visual generation incurs substantially higher computational costs than understanding, particularly for video. This imbalance motivates us to invert the conventional paradigm: rather than extending understanding-centric MLLMs to support generation, we propose Uni-ViGU, a framework that unifies video generation and understanding by extending a video generator as the foundation. We introduce a unified flow-matching method that performs continuous flow matching for video and discrete flow matching for text within a single process, enabling coherent multimodal generation. We further propose a modality-driven MoE-based framework that augments Transformer blocks with lightweight text-generation layers while preserving generative priors. To repurpose generation knowledge for understanding, we design a bidirectional training mechanism with two stages: Knowledge Recall reconstructs input prompts to leverage learned text-video correspondences, while Capability Refinement fine-tunes on detailed captions to establish discriminative shared representations. Experiments demonstrate that Uni-ViGU achieves competitive performance on both video generation and understanding, validating generation-centric architectures as a scalable path toward unified multimodal intelligence. Project Page and Code: https://fr0zencrane.github.io/uni-vigu-page/.
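
For concreteness, below is a minimal PyTorch sketch of what a unified flow-matching objective of the kind described in the abstract could look like: continuous flow matching on video latents and discrete (mask-based) flow matching on text tokens, computed in one joint pass with a shared timestep. The model interface, the MASK_ID constant, and the function name are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

MASK_ID = 0  # hypothetical [MASK] token id for the discrete text branch (assumption)

def unified_flow_matching_loss(model, video_latents, text_tokens):
    """video_latents: (B, C, T, H, W) clean latents; text_tokens: (B, L) int64 ids."""
    B = video_latents.size(0)
    t = torch.rand(B, device=video_latents.device)  # one shared timestep per sample

    # Continuous flow matching (video): linear path from noise to data,
    # regress the velocity field along that path.
    noise = torch.randn_like(video_latents)
    t_v = t.view(B, 1, 1, 1, 1)
    x_t = (1.0 - t_v) * noise + t_v * video_latents
    target_velocity = video_latents - noise  # d x_t / d t along the linear path

    # Discrete flow matching (text): keep each token with probability t, mask the
    # rest, and train the model to recover the original ids at masked positions.
    keep = torch.rand(text_tokens.shape, device=text_tokens.device) < t.unsqueeze(1)
    corrupted = torch.where(keep, text_tokens, torch.full_like(text_tokens, MASK_ID))

    # Single joint forward over both modalities (assumed interface: returns a
    # velocity prediction for video and per-position vocabulary logits for text).
    pred_velocity, text_logits = model(x_t, corrupted, t)

    video_loss = F.mse_loss(pred_velocity, target_velocity)
    masked = ~keep
    text_loss = (
        F.cross_entropy(text_logits[masked], text_tokens[masked])
        if masked.any()
        else text_logits.sum() * 0.0  # no masked positions in this batch
    )
    return video_loss + text_loss
```

Under this reading, the two branches share the same timestep and the same backbone forward pass, which is what lets one diffusion-style generator serve both modalities; how Uni-ViGU actually weights or schedules the two losses is not specified in the abstract.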