Joint 3D Geometry Reconstruction and Motion Generation for 4D Synthesis from a Single Image

December 4, 2025
Authors: Yanran Zhang, Ziyi Wang, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu
cs.AI

Abstract

Generating interactive and dynamic 4D scenes from a single static image remains a core challenge. Most existing generate-then-reconstruct and reconstruct-then-generate methods decouple geometry from motion, causing spatiotemporal inconsistencies and poor generalization. To address these issues, we extend the reconstruct-then-generate framework to jointly perform Motion generation and geometric Reconstruction for 4D Synthesis (MoRe4D). We first introduce TrajScene-60K, a large-scale dataset of 60,000 video samples with dense point trajectories, addressing the scarcity of high-quality 4D scene data. Building on this dataset, we propose a diffusion-based 4D Scene Trajectory Generator (4D-STraG) that jointly generates geometrically consistent and motion-plausible 4D point trajectories. To leverage single-view priors, we design a depth-guided motion normalization strategy and a motion-aware module for effective integration of geometry and dynamics. We then propose a 4D View Synthesis Module (4D-ViSM) that renders videos along arbitrary camera trajectories from the 4D point-track representation. Experiments show that MoRe4D generates high-quality 4D scenes with multi-view consistency and rich dynamic details from a single image. Code is available at https://github.com/Zhangyr2022/MoRe4D.
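
To make the pipeline in the abstract concrete, below is a minimal Python/NumPy sketch of a 4D point-track representation and the two operations named above: depth-guided motion normalization and rendering a frame under an arbitrary camera pose. The tensor shapes, function names, and the specific normalization rule (dividing displacements by first-frame depth) are illustrative assumptions, not the released MoRe4D implementation; the actual 4D-STraG and 4D-ViSM are learned diffusion and view-synthesis modules.

```python
# Illustrative sketch only: shapes, names, and the normalization rule are
# assumptions; the real MoRe4D modules (4D-STraG, 4D-ViSM) are learned.
import numpy as np


def depth_guided_normalize(tracks: np.ndarray, depth0: np.ndarray) -> np.ndarray:
    """Normalize per-point motion by each point's first-frame depth.

    tracks: (T, N, 3) 4D point trajectories in camera coordinates.
    depth0: (N,) monocular depth of each point at frame 0.
    Dividing displacements by depth puts near and far points on a comparable
    motion scale -- one plausible reading of depth-guided motion normalization.
    """
    disp = tracks - tracks[:1]            # (T, N, 3): motion relative to frame 0
    return disp / depth0[None, :, None]   # depth-guided rescaling


def project_frame(points: np.ndarray, K: np.ndarray, w2c: np.ndarray) -> np.ndarray:
    """Pinhole projection of one time slice of the track tensor.

    points: (N, 3) points at time t; K: (3, 3) intrinsics; w2c: (4, 4)
    world-to-camera pose. Returns (N, 2) pixel coordinates. The real 4D-ViSM
    is a learned renderer; this shows only the underlying geometry that lets
    a frame be rendered for any chosen camera trajectory.
    """
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)  # (N, 4)
    cam = (w2c @ homo.T).T[:, :3]         # world -> camera coordinates
    uv = (K @ cam.T).T                    # perspective projection
    return uv[:, :2] / uv[:, 2:3]         # divide by depth


# Toy usage: 8 frames, 5 points drifting along +x in front of the camera.
T, N = 8, 5
rng = np.random.default_rng(0)
tracks = rng.random((T, N, 3)) + np.array([0.0, 0.0, 4.0])       # keep z > 0
tracks += np.arange(T)[:, None, None] * np.array([0.1, 0.0, 0.0])
depth0 = tracks[0, :, 2]                                         # frame-0 depth
norm_disp = depth_guided_normalize(tracks, depth0)
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
uv = project_frame(tracks[3], K, np.eye(4))                      # frame 3, identity pose
print(norm_disp.shape, uv.shape)                                 # (8, 5, 3) (5, 2)
```

Sweeping `w2c` over a sequence of poses would yield one projected frame per pose, which is the geometric intuition behind rendering videos along arbitrary camera trajectories from a single shared set of 4D point tracks.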