DORSal: Diffusion for Object-centric Representations of Scenes
June 13, 2023
Authors: Allan Jabri, Sjoerd van Steenkiste, Emiel Hoogeboom, Mehdi S. M. Sajjadi, Thomas Kipf
cs.AI
Abstract
Recent progress in 3D scene understanding enables scalable learning of
representations across large datasets of diverse scenes. As a consequence,
generalization to unseen scenes and objects, rendering novel views from just a
single or a handful of input images, and controllable scene generation that
supports editing, is now possible. However, training jointly on a large number
of scenes typically compromises rendering quality when compared to single-scene
optimized models such as NeRFs. In this paper, we leverage recent progress in
diffusion models to equip 3D scene representation learning models with the
ability to render high-fidelity novel views, while retaining benefits such as
object-level scene editing to a large degree. In particular, we propose DORSal,
which adapts a video diffusion architecture for 3D scene generation conditioned
on object-centric slot-based representations of scenes. On both complex
synthetic multi-object scenes and on the real-world large-scale Street View
dataset, we show that DORSal enables scalable neural rendering of 3D scenes
with object-level editing and improves upon existing approaches.
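The core idea in the abstract, a denoiser conditioned on object-centric slot representations, can be illustrated with a minimal sketch. This is not the DORSal implementation: the shapes, the toy cross-attention conditioning, and all function names (`cross_attend`, `denoise_step`) are illustrative assumptions, standing in for the actual video diffusion U-Net.

```python
import numpy as np

def cross_attend(queries, slots):
    # Toy cross-attention: each query vector attends over the slot vectors.
    # queries: (n_pixels, d) per-view features; slots: (n_slots, d)
    # object-centric scene representation. (Illustrative only.)
    logits = queries @ slots.T / np.sqrt(queries.shape[-1])
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1
    return attn @ slots                            # (n_pixels, d)

def denoise_step(noisy_views, slots, alpha=0.1):
    # One toy denoising update for a batch of target views, conditioned
    # on the slots: mix slot-attended features back into each view.
    # noisy_views: (n_views, n_pixels, d); slots: (n_slots, d)
    out = np.empty_like(noisy_views)
    for v in range(noisy_views.shape[0]):
        cond = cross_attend(noisy_views[v], slots)
        out[v] = noisy_views[v] + alpha * (cond - noisy_views[v])
    return out
```

Under this sketch, object-level editing amounts to editing the conditioning before sampling, e.g. removing one object with `np.delete(slots, obj_idx, axis=0)` and re-running the denoising loop.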