VINO: A Unified Visual Generator with Interleaved OmniModal Context
January 5, 2026
Authors: Junyi Chen, Tong He, Zhoujie Fu, Pengfei Wan, Kun Gai, Weicai Ye
cs.AI
Abstract
We present VINO, a unified visual generator that performs image and video generation and editing within a single framework. Instead of relying on task-specific models or independent modules for each modality, VINO uses a shared diffusion backbone that conditions on text, images, and videos, enabling a broad range of visual creation and editing tasks under one model. Specifically, VINO couples a vision-language model (VLM) with a Multimodal Diffusion Transformer (MMDiT), where multimodal inputs are encoded as interleaved conditioning tokens and then used to guide the diffusion process. This design supports multi-reference grounding, long-form instruction following, and coherent identity preservation across static and dynamic content, while avoiding modality-specific architectural components. To train such a unified system, we introduce a multi-stage training pipeline that progressively expands a video generation base model into a unified, multi-task generator capable of both image and video input and output. Across diverse generation and editing benchmarks, VINO demonstrates strong visual quality, faithful instruction following, improved reference and attribute preservation, and more controllable multi-identity edits. Our results highlight a practical path toward scalable unified visual generation and the promise of interleaved, in-context computation as a foundation for general-purpose visual creation.