
Viewpoint Textual Inversion: Unleashing Novel View Synthesis with Pretrained 2D Diffusion Models

September 14, 2023
Authors: James Burgess, Kuan-Chieh Wang, Serena Yeung
cs.AI

Abstract

Text-to-image diffusion models understand spatial relationships between objects, but do they represent the true 3D structure of the world from only 2D supervision? We demonstrate that yes, 3D knowledge is encoded in 2D image diffusion models like Stable Diffusion, and we show that this structure can be exploited for 3D vision tasks. Our method, Viewpoint Neural Textual Inversion (ViewNeTI), controls the 3D viewpoint of objects in generated images from frozen diffusion models. We train a small neural mapper to take camera viewpoint parameters and predict text encoder latents; the latents then condition the diffusion generation process to produce images with the desired camera viewpoint. ViewNeTI naturally addresses Novel View Synthesis (NVS). By leveraging the frozen diffusion model as a prior, we can solve NVS with very few input views; we can even do single-view novel view synthesis. Our single-view NVS predictions have good semantic details and photorealism compared to prior methods. Our approach is well suited for modeling the uncertainty inherent in sparse 3D vision problems because it can efficiently generate diverse samples. Our view-control mechanism is general, and can even change the camera view in images generated by user-defined prompts.
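The core mechanism described above — a small neural mapper from camera viewpoint parameters to text-encoder latents that condition a frozen diffusion model — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual architecture: the input parameterization (azimuth, elevation, radius), hidden width, and embedding dimension are assumptions; in practice the predicted vector would stand in for a placeholder token's embedding fed to Stable Diffusion's frozen text encoder.

```python
import numpy as np

EMB_DIM = 768   # CLIP text-encoder embedding size used by Stable Diffusion
HIDDEN = 128    # assumed hidden width of the small mapper (illustrative)

# Randomly initialized weights of the tiny MLP mapper; in ViewNeTI these
# are the only parameters trained, while the diffusion model stays frozen.
rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.02, (3, HIDDEN))   # input: (azimuth, elevation, radius)
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.02, (HIDDEN, EMB_DIM))
b2 = np.zeros(EMB_DIM)

def view_mapper(camera_params: np.ndarray) -> np.ndarray:
    """Map camera viewpoint parameters to a text-encoder-space latent.

    The returned vector would condition the frozen diffusion model's
    generation (e.g. as a learned pseudo-token embedding), steering the
    output image toward the requested camera viewpoint.
    """
    h = np.tanh(camera_params @ W1 + b1)  # small nonlinearity, assumed
    return h @ W2 + b2

# Example: latent for a camera at azimuth 30°, elevation 10°, radius 2
latent = view_mapper(np.array([np.deg2rad(30.0), np.deg2rad(10.0), 2.0]))
print(latent.shape)  # (768,)
```

Because only the mapper is optimized, training is cheap and the frozen model's image prior is preserved — which is what lets the method act as a prior for few-view and single-view NVS.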