AutoDecoding Latent 3D Diffusion Models
July 7, 2023
Authors: Evangelos Ntavelis, Aliaksandr Siarohin, Kyle Olszewski, Chaoyang Wang, Luc Van Gool, Sergey Tulyakov
cs.AI
Abstract
We present a novel approach to the generation of static and articulated 3D
assets that has a 3D autodecoder at its core. The 3D autodecoder framework
embeds properties learned from the target dataset in the latent space, which
can then be decoded into a volumetric representation for rendering
view-consistent appearance and geometry. We then identify the appropriate
intermediate volumetric latent space, and introduce robust normalization and
de-normalization operations to learn a 3D diffusion from 2D images or monocular
videos of rigid or articulated objects. Our approach is flexible enough to use
either existing camera supervision or no camera information at all -- instead
efficiently learning it during training. Our evaluations demonstrate that our
generation results outperform state-of-the-art alternatives on various
benchmark datasets and metrics, including multi-view image datasets of
synthetic objects, real in-the-wild videos of moving people, and a large-scale,
real video dataset of static objects.
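To make the autodecoder idea concrete, below is a minimal PyTorch sketch: each training object owns a learned latent code (there is no image encoder), and a small deconvolutional decoder maps that code to a coarse volumetric grid that can be volume-rendered for a view-consistent 2D reconstruction loss. The class name `VolumeAutodecoder`, the 32^3 grid resolution, and the 4-channel (density + RGB) layout are illustrative assumptions, not the paper's exact architecture.

```python
# Illustrative sketch of a 3D autodecoder; names and sizes are assumptions.
import torch
import torch.nn as nn

class VolumeAutodecoder(nn.Module):
    """Maps a learned per-object embedding to a coarse volumetric grid.

    There is no encoder: each training object i owns the latent code
    self.codes.weight[i], optimized jointly with the decoder through a
    differentiable rendering loss on 2D images or video frames.
    """

    def __init__(self, num_objects: int, latent_dim: int = 256):
        super().__init__()
        self.codes = nn.Embedding(num_objects, latent_dim)  # one code per object
        # Project the 1D code to a small 3D feature volume, then upsample.
        self.fc = nn.Linear(latent_dim, 128 * 4 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),  # 4^3 -> 8^3
            nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),   # 8^3 -> 16^3
            nn.ReLU(),
            nn.ConvTranspose3d(32, 4, 4, stride=2, padding=1),    # 16^3 -> 32^3
        )

    def forward(self, object_ids: torch.Tensor) -> torch.Tensor:
        z = self.codes(object_ids)               # (B, latent_dim)
        x = self.fc(z).view(-1, 128, 4, 4, 4)    # (B, 128, 4, 4, 4)
        vol = self.decoder(x)                    # (B, 4, 32, 32, 32)
        # Channel 0: density; channels 1-3: RGB. The grid is volume-rendered
        # from arbitrary camera poses for the 2D supervision described above.
        return vol
```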
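The "robust normalization and de-normalization" step can likewise be sketched. One plausible reading, assumed here rather than taken from the paper, is per-channel median/IQR statistics over the intermediate latent volumes: unlike mean/std, these keep outlier voxels from skewing the scale that a standard diffusion noise schedule expects.

```python
# Hypothetical robust normalization for latent volumes; the paper's exact
# statistics may differ. Shapes: latents is (N, C, D, H, W).
import torch

def fit_robust_stats(latents: torch.Tensor):
    """Compute per-channel median and interquartile range over the dataset."""
    flat = latents.permute(1, 0, 2, 3, 4).reshape(latents.shape[1], -1)
    median = flat.median(dim=1).values            # (C,)
    q1 = flat.quantile(0.25, dim=1)
    q3 = flat.quantile(0.75, dim=1)
    iqr = (q3 - q1).clamp_min(1e-6)               # avoid division by zero
    return median, iqr

def normalize(vol, median, iqr):
    # Rescale latents to a well-behaved range before diffusion training;
    # channel statistics broadcast over the spatial dimensions.
    return (vol - median[:, None, None, None]) / iqr[:, None, None, None]

def denormalize(vol, median, iqr):
    # Invert after sampling, before decoding the volume for rendering.
    return vol * iqr[:, None, None, None] + median[:, None, None, None]
```

In this reading, the diffusion model is trained entirely in the normalized latent volume space; `denormalize` is applied once per generated sample before the decoder renders it.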