VideoCanvas: Unified Video Completion from Arbitrary Spatiotemporal Patches via In-Context Conditioning

October 9, 2025
作者: Minghong Cai, Qiulin Wang, Zongli Ye, Wenze Liu, Quande Liu, Weicai Ye, Xintao Wang, Pengfei Wan, Kun Gai, Xiangyu Yue
cs.AI

Abstract

We introduce the task of arbitrary spatio-temporal video completion, where a video is generated from arbitrary, user-specified patches placed at any spatial location and timestamp, akin to painting on a video canvas. This flexible formulation naturally unifies many existing controllable video generation tasks, including first-frame image-to-video, inpainting, extension, and interpolation, under a single, cohesive paradigm. Realizing this vision, however, faces a fundamental obstacle in modern latent video diffusion models: the temporal ambiguity introduced by causal VAEs, where multiple pixel frames are compressed into a single latent representation, making precise frame-level conditioning structurally difficult. We address this challenge with VideoCanvas, a novel framework that adapts the In-Context Conditioning (ICC) paradigm to this fine-grained control task with zero new parameters. We propose a hybrid conditioning strategy that decouples spatial and temporal control: spatial placement is handled via zero-padding, while temporal alignment is achieved through Temporal RoPE Interpolation, which assigns each condition a continuous fractional position within the latent sequence. This resolves the VAE's temporal ambiguity and enables pixel-frame-aware control on a frozen backbone. To evaluate this new capability, we develop VideoCanvasBench, the first benchmark for arbitrary spatio-temporal video completion, covering both intra-scene fidelity and inter-scene creativity. Experiments demonstrate that VideoCanvas significantly outperforms existing conditioning paradigms, establishing a new state of the art in flexible and unified video generation.
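The core idea behind Temporal RoPE Interpolation is that rotary position embeddings accept any real-valued position, so a conditioning pixel frame whose timestamp falls between two latent indices can be assigned a fractional position rather than being snapped to a compressed latent slot. The sketch below illustrates this with plain NumPy; it is not the paper's implementation, and the function names, feature dimension, and the 4x temporal compression factor are illustrative assumptions.

```python
import numpy as np

def rope_angles(pos, dim, base=10000.0):
    """Rotary-embedding rotation angles for a (possibly fractional) position."""
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    return pos * inv_freq  # shape (dim // 2,)

def apply_rope(x, pos, base=10000.0):
    """Rotate consecutive feature pairs of x by the angles for position `pos`."""
    theta = rope_angles(pos, x.shape[-1], base)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = np.cos(theta), np.sin(theta)
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Hypothetical setup: a causal VAE with 4x temporal compression maps pixel
# frames {0..3} to latent index 0, {4..7} to index 1, etc. A condition at
# pixel frame t=2 is therefore given the fractional latent position 2/4,
# between latent tokens 0 and 1, instead of being collapsed onto either one.
latent_positions = [0.0, 1.0, 2.0]   # regular latent tokens
cond_position = 2 / 4                # fractional position for the condition
q = np.ones(8)                       # toy query vector
q_rot = apply_rope(q, cond_position)
```

Because rotation is norm-preserving, the conditioned token keeps its feature magnitude while its relative-position phase lands exactly between the neighboring latent tokens, which is what lets a frozen backbone distinguish pixel-frame timestamps the VAE has merged.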