
Invisible Stitch: Generating Smooth 3D Scenes with Depth Inpainting

April 30, 2024
作者: Paul Engstler, Andrea Vedaldi, Iro Laina, Christian Rupprecht
cs.AI

Abstract

3D scene generation has quickly become a challenging new research direction, fueled by consistent improvements of 2D generative diffusion models. Most prior work in this area generates scenes by iteratively stitching newly generated frames with existing geometry. These works often depend on pre-trained monocular depth estimators to lift the generated images into 3D, fusing them with the existing scene representation. These approaches are then often evaluated via a text metric, measuring the similarity between the generated images and a given text prompt. In this work, we make two fundamental contributions to the field of 3D scene generation. First, we note that lifting images to 3D with a monocular depth estimation model is suboptimal as it ignores the geometry of the existing scene. We thus introduce a novel depth completion model, trained via teacher distillation and self-training to learn the 3D fusion process, resulting in improved geometric coherence of the scene. Second, we introduce a new benchmarking scheme for scene generation methods that is based on ground truth geometry, and thus measures the quality of the structure of the scene.
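To make the contrast in the first contribution concrete, here is a minimal, purely illustrative sketch (all function names and the toy scale-alignment heuristic are assumptions for illustration, not the paper's trained model): a monocular estimator predicts depth from the new frame alone, whereas a depth completion model is additionally conditioned on the depth rendered from the existing scene and only fills in the regions that are not yet covered.

```python
import numpy as np

def monocular_depth(image):
    """Toy stand-in for a pre-trained monocular depth estimator.
    It sees only the new frame, so its scale/shift can disagree with
    geometry that already exists in the scene."""
    return 1.0 + 0.1 * image.mean(axis=-1)

def depth_completion(image, partial_depth, valid_mask):
    """Toy stand-in for a depth completion step: conditioned on the depth
    rendered from the existing scene (partial_depth, valid where valid_mask
    is True), it keeps known geometry fixed and inpaints the rest.
    The paper trains a network for this; the heuristic below only
    illustrates the conditioning."""
    pred = monocular_depth(image)
    if valid_mask.any():
        # Align the single-image prediction to the known depth so the newly
        # generated content stitches onto the existing scene without seams.
        scale = np.median(partial_depth[valid_mask] / pred[valid_mask])
        pred = scale * pred
        pred[valid_mask] = partial_depth[valid_mask]  # keep known geometry
    return pred

# Minimal usage example on synthetic data.
H, W = 64, 64
image = np.random.rand(H, W, 3)           # newly generated frame
partial_depth = np.full((H, W), 2.0)      # depth rendered from existing scene
valid_mask = np.zeros((H, W), dtype=bool)
valid_mask[:, : W // 2] = True            # left half is already covered

naive = monocular_depth(image)                               # ignores the scene
completed = depth_completion(image, partial_depth, valid_mask)

print("naive depth at known pixels:", naive[valid_mask].mean())
print("completed depth at known pixels:", completed[valid_mask].mean())
```

The point of the sketch is only the interface: by conditioning on the rendered depth of the existing scene, known geometry stays fixed and the inpainted depth is forced to agree with it, which is the coherence property the paper's learned depth completion model (trained via teacher distillation and self-training) is designed to provide.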
