SV3D: Novel Multi-view Synthesis and 3D Generation from a Single Image using Latent Video Diffusion
March 18, 2024
Authors: Vikram Voleti, Chun-Han Yao, Mark Boss, Adam Letts, David Pankratz, Dmitry Tochilkin, Christian Laforte, Robin Rombach, Varun Jampani
cs.AI
Abstract
We present Stable Video 3D (SV3D) -- a latent video diffusion model for high-resolution, image-to-multi-view generation of orbital videos around a 3D object. Recent works on 3D generation propose techniques to adapt 2D generative models for novel view synthesis (NVS) and 3D optimization. However, these methods have several disadvantages due to either limited views or inconsistent NVS, thereby affecting the performance of 3D object generation. In this work, we propose SV3D, which adapts an image-to-video diffusion model for novel multi-view synthesis and 3D generation, thereby leveraging the generalization and multi-view consistency of video models while further adding explicit camera control for NVS. We also propose improved 3D optimization techniques that use SV3D and its NVS outputs for image-to-3D generation. Extensive experimental results on multiple datasets with 2D and 3D metrics, as well as a user study, demonstrate SV3D's state-of-the-art performance on NVS and 3D reconstruction compared to prior works.
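The abstract describes conditioning a video diffusion model on an explicit camera orbit around the object. As a minimal illustration of what such an orbit looks like, the sketch below samples evenly spaced camera poses on a circle around the origin at a fixed elevation. The frame count, elevation, and radius here are hypothetical illustration values, not parameters taken from the paper.

```python
import math

def orbit_camera_poses(num_views=21, elevation_deg=10.0, radius=2.0):
    """Evenly spaced azimuths on a circular orbit around the origin.

    Returns a list of (azimuth_deg, eye) pairs, where `eye` is the
    camera position (x, y, z) looking toward the origin, with z up.
    All defaults are illustrative assumptions, not values from SV3D.
    """
    poses = []
    elev = math.radians(elevation_deg)
    for i in range(num_views):
        azim_deg = 360.0 * i / num_views          # full 360-degree sweep
        azim = math.radians(azim_deg)
        eye = (radius * math.cos(elev) * math.cos(azim),
               radius * math.cos(elev) * math.sin(azim),
               radius * math.sin(elev))            # constant height above the equator
        poses.append((azim_deg, eye))
    return poses

poses = orbit_camera_poses()
```

In an NVS setting, each (azimuth, elevation) pair would be fed to the generative model as the camera condition for the corresponding output frame; non-uniform or user-specified orbits can be produced by replacing the evenly spaced azimuth schedule.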