Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation
March 28, 2024
Authors: Yujin Chen, Yinyu Nie, Benjamin Ummenhofer, Reiner Birkl, Michael Paulitsch, Matthias Müller, Matthias Nießner
cs.AI
Abstract
We present Mesh2NeRF, an approach to derive ground-truth radiance fields from
textured meshes for 3D generation tasks. Many 3D generative approaches
represent 3D scenes as radiance fields for training. Their ground-truth
radiance fields are usually fitted from multi-view renderings from a
large-scale synthetic 3D dataset, which often results in artifacts due to
occlusions or under-fitting issues. In Mesh2NeRF, we propose an analytic
solution to directly obtain ground-truth radiance fields from 3D meshes,
characterizing the density field with an occupancy function featuring a defined
surface thickness, and determining view-dependent color through a reflection
function considering both the mesh and environment lighting. Mesh2NeRF extracts
accurate radiance fields, which provide direct supervision for training
generative NeRFs and single scene representation. We validate the effectiveness
of Mesh2NeRF across various tasks, achieving a noteworthy 3.12dB improvement in
PSNR for view synthesis in single scene representation on the ABO dataset, a
0.69 PSNR enhancement in the single-view conditional generation of ShapeNet
Cars, and notably improved mesh extraction from NeRF in the unconditional
generation of Objaverse Mugs.
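
The abstract's core idea of defining the density field from mesh occupancy with a fixed surface thickness can be illustrated with a short sketch. The snippet below is a minimal illustration, not the paper's exact formulation: it assumes signed distances to the mesh surface are precomputed, and the function name `occupancy_density` and the parameters `thickness` and `alpha_target` are hypothetical, chosen so that a ray crossing the surface shell accumulates the target opacity under the standard volume-rendering relation alpha = 1 - exp(-sigma * thickness).

```python
import numpy as np

def occupancy_density(signed_dist, thickness=0.01, alpha_target=0.99):
    """Toy density field from mesh occupancy (illustrative only).

    Points within a shell of the given surface thickness around the mesh are
    treated as occupied. A constant density inside the shell is chosen so that
    a ray crossing the full shell reaches the target opacity via the
    volume-rendering relation alpha = 1 - exp(-sigma * thickness).
    """
    occupied = np.abs(signed_dist) <= 0.5 * thickness          # inside the surface shell?
    sigma_surface = -np.log(1.0 - alpha_target) / thickness    # constant shell density
    return np.where(occupied, sigma_surface, 0.0)

# Example: signed distances of a few query points to the mesh surface
# (sign convention assumed: positive outside, negative inside).
d = np.array([0.2, 0.004, -0.003, -0.5])
print(occupancy_density(d))  # nonzero density only near the surface
```

The view-dependent color described in the abstract, obtained from a reflection function that accounts for the mesh and environment lighting, is omitted from this sketch.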