SS4D: Native 4D Generative Model via Structured Spacetime Latents
December 16, 2025
Authors: Zhibing Li, Mengchen Zhang, Tong Wu, Jing Tan, Jiaqi Wang, Dahua Lin
cs.AI
Abstract
We present SS4D, a native 4D generative model that synthesizes dynamic 3D objects directly from monocular video. Unlike prior approaches that construct 4D representations by optimizing over 3D or video generative models, we train a generator directly on 4D data, achieving high fidelity, temporal coherence, and structural consistency. At the core of our method is a compressed set of structured spacetime latents. Specifically, (1) to address the scarcity of 4D training data, we build on a pre-trained single-image-to-3D model, preserving strong spatial consistency; (2) to enforce temporal coherence, we introduce dedicated temporal layers that reason across frames; and (3) to support efficient training and inference over long video sequences, we compress the latent sequence along the temporal axis using factorized 4D convolutions and temporal downsampling blocks. In addition, we employ a carefully designed training strategy to enhance robustness against occlusion.
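The abstract does not include implementation details, so the following is only a minimal PyTorch sketch of what "factorized 4D convolutions" and "temporal downsampling blocks" over a sequence of structured 3D latents might look like. The module names, the (B, C, T, D, H, W) layout, and the specific spatial/temporal factorization are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class Factorized4DConv(nn.Module):
    """Sketch of a factorized 4D convolution: a full 4D kernel over
    (time, depth, height, width) is approximated by a spatial 3D conv
    applied per frame, followed by a temporal 1D conv applied per
    spatial location. Hypothetical, not the paper's implementation."""

    def __init__(self, channels: int, k_spatial: int = 3, k_temporal: int = 3):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels, k_spatial,
                                 padding=k_spatial // 2)
        self.temporal = nn.Conv1d(channels, channels, k_temporal,
                                  padding=k_temporal // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, D, H, W) -- T structured 3D latent frames
        b, c, t, d, h, w = x.shape
        # Spatial 3D conv on each frame independently.
        x = x.permute(0, 2, 1, 3, 4, 5).reshape(b * t, c, d, h, w)
        x = self.spatial(x)
        # Temporal 1D conv at each spatial location (the "temporal
        # layer" that reasons across frames).
        x = x.reshape(b, t, c, d, h, w).permute(0, 3, 4, 5, 2, 1)
        x = x.reshape(b * d * h * w, c, t)
        x = self.temporal(x)
        x = x.reshape(b, d, h, w, c, t).permute(0, 4, 5, 1, 2, 3)
        return x  # (B, C, T, D, H, W)


class TemporalDownsample(nn.Module):
    """Strided temporal conv that roughly halves the frame count."""

    def __init__(self, channels: int):
        super().__init__()
        self.down = nn.Conv1d(channels, channels, kernel_size=3,
                              stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, t, d, h, w = x.shape
        x = x.permute(0, 3, 4, 5, 1, 2).reshape(b * d * h * w, c, t)
        x = self.down(x)          # temporal length becomes ceil(T/2)
        t2 = x.shape[-1]
        return x.reshape(b, d, h, w, c, t2).permute(0, 4, 5, 1, 2, 3)
```

Under these assumptions, stacking two such downsampling blocks would compress a 32-frame latent sequence to 8 frames, which is the kind of temporal-axis compression the abstract credits for making training and inference on long video sequences tractable.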