

MoVieS: Motion-Aware 4D Dynamic View Synthesis in One Second

July 14, 2025
作者: Chenguo Lin, Yuchen Lin, Panwang Pan, Yifan Yu, Honglei Yan, Katerina Fragkiadaki, Yadong Mu
cs.AI

Abstract

We present MoVieS, a novel feed-forward model that synthesizes 4D dynamic novel views from monocular videos in one second. MoVieS represents dynamic 3D scenes using pixel-aligned grids of Gaussian primitives, explicitly supervising their time-varying motion. This allows, for the first time, the unified modeling of appearance, geometry and motion, and enables view synthesis, reconstruction and 3D point tracking within a single learning-based framework. By bridging novel view synthesis with dynamic geometry reconstruction, MoVieS enables large-scale training on diverse datasets with minimal dependence on task-specific supervision. As a result, it also naturally supports a wide range of zero-shot applications, such as scene flow estimation and moving object segmentation. Extensive experiments validate the effectiveness and efficiency of MoVieS across multiple tasks, achieving competitive performance while offering speedups of several orders of magnitude.
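To make the abstract's central idea concrete, the sketch below illustrates what a pixel-aligned grid of Gaussian primitives with time-varying motion could look like in PyTorch: a per-pixel head predicts static Gaussian parameters plus a 3D offset per timestep, so each Gaussian carries a trajectory that can be supervised directly. The class name, feature shapes, and parameterization are illustrative assumptions for exposition only, not the released MoVieS architecture.

```python
# Hypothetical sketch: pixel-aligned Gaussian prediction with per-timestep
# motion offsets. All names and shapes are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PixelAlignedGaussianHead(nn.Module):
    """Predicts one Gaussian primitive per pixel plus a motion offset per timestep."""

    def __init__(self, feat_dim: int = 256, num_timesteps: int = 8):
        super().__init__()
        self.num_timesteps = num_timesteps
        # Static parameters per pixel: depth (1), scale (3), rotation quaternion (4),
        # opacity (1), RGB color (3) -> 12 channels.
        self.static_head = nn.Conv2d(feat_dim, 12, kernel_size=1)
        # Time-varying motion: a 3D offset per timestep for every pixel-aligned Gaussian.
        self.motion_head = nn.Conv2d(feat_dim, 3 * num_timesteps, kernel_size=1)

    def forward(self, feats: torch.Tensor, rays_o: torch.Tensor, rays_d: torch.Tensor):
        """
        feats:  (B, C, H, W) per-pixel features from a video backbone.
        rays_o: (B, 3, H, W) camera ray origins.
        rays_d: (B, 3, H, W) unit camera ray directions.
        """
        B, _, H, W = feats.shape
        static = self.static_head(feats)                      # (B, 12, H, W)
        depth = F.softplus(static[:, :1])                     # positive depth along the ray
        scale = torch.exp(static[:, 1:4].clamp(max=5.0))      # positive anisotropic scales
        rotation = F.normalize(static[:, 4:8], dim=1)         # unit quaternion
        opacity = torch.sigmoid(static[:, 8:9])
        color = torch.sigmoid(static[:, 9:12])

        # Pixel-aligned means: unproject each pixel along its camera ray.
        means = rays_o + depth * rays_d                       # (B, 3, H, W)

        # Per-timestep offsets give each Gaussian a trajectory over time,
        # which is what explicit motion supervision would act on.
        offsets = self.motion_head(feats).view(B, self.num_timesteps, 3, H, W)
        means_t = means.unsqueeze(1) + offsets                # (B, T, 3, H, W)

        return {
            "means_t": means_t, "scale": scale, "rotation": rotation,
            "opacity": opacity, "color": color,
        }


if __name__ == "__main__":
    head = PixelAlignedGaussianHead(feat_dim=256, num_timesteps=8)
    feats = torch.randn(1, 256, 32, 32)
    rays_o = torch.zeros(1, 3, 32, 32)
    rays_d = F.normalize(torch.randn(1, 3, 32, 32), dim=1)
    out = head(feats, rays_o, rays_d)
    print(out["means_t"].shape)  # torch.Size([1, 8, 3, 32, 32])
```

Under these assumptions, the time-indexed means would feed a differentiable Gaussian rasterizer for novel-view supervision, while the same trajectories could be compared against 3D point tracks or scene flow, which is how a single set of primitives can serve rendering, reconstruction, and tracking at once.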