AutoDecoding Latent 3D Diffusion Models
July 7, 2023
Authors: Evangelos Ntavelis, Aliaksandr Siarohin, Kyle Olszewski, Chaoyang Wang, Luc Van Gool, Sergey Tulyakov
cs.AI
Abstract
We present a novel approach to the generation of static and articulated 3D
assets that has a 3D autodecoder at its core. The 3D autodecoder framework
embeds properties learned from the target dataset in the latent space, which
can then be decoded into a volumetric representation for rendering
view-consistent appearance and geometry. We then identify the appropriate
intermediate volumetric latent space, and introduce robust normalization and
de-normalization operations to learn a 3D diffusion from 2D images or monocular
videos of rigid or articulated objects. Our approach is flexible enough to use
either existing camera supervision or no camera information at all -- instead
efficiently learning it during training. Our evaluations demonstrate that our
generation results outperform state-of-the-art alternatives on various
benchmark datasets and metrics, including multi-view image datasets of
synthetic objects, real in-the-wild videos of moving people, and a large-scale,
real video dataset of static objects.
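The abstract's "robust normalization and de-normalization operations" map volumetric latents into a well-scaled space for diffusion and back again before decoding. The sketch below illustrates the general idea with simple per-channel statistics; the function names, shapes, and statistics are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def normalize(latent, mean, std, eps=1e-6):
    """Map a volumetric latent into a roughly zero-mean, unit-variance
    space, which is better suited for training a diffusion model."""
    return (latent - mean) / (std + eps)

def denormalize(z, mean, std, eps=1e-6):
    """Invert the normalization before decoding the latent back into a
    volume for rendering."""
    return z * (std + eps) + mean

# Toy example: a batch of 16 latents, each an 8-channel 4x4x4 volume.
# In practice the statistics would come from the whole training set.
rng = np.random.default_rng(0)
latents = rng.normal(loc=3.0, scale=5.0, size=(16, 8, 4, 4, 4))
mean = latents.mean(axis=(0, 2, 3, 4), keepdims=True)[0]
std = latents.std(axis=(0, 2, 3, 4), keepdims=True)[0]

z = normalize(latents, mean, std)          # diffusion operates here
recon = denormalize(z, mean, std)          # decoded back for rendering
assert np.allclose(recon, latents)
```

The round trip is lossless up to floating-point error, so the diffusion model can be trained entirely in the normalized space without affecting the decoder.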