
Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation

March 28, 2024
Authors: Yujin Chen, Yinyu Nie, Benjamin Ummenhofer, Reiner Birkl, Michael Paulitsch, Matthias Müller, Matthias Nießner
cs.AI

Abstract

We present Mesh2NeRF, an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks. Many 3D generative approaches represent 3D scenes as radiance fields for training. Their ground-truth radiance fields are usually fitted from multi-view renderings of a large-scale synthetic 3D dataset, which often results in artifacts due to occlusions or under-fitting issues. In Mesh2NeRF, we propose an analytic solution to obtain ground-truth radiance fields directly from 3D meshes, characterizing the density field with an occupancy function featuring a defined surface thickness, and determining view-dependent color through a reflection function that accounts for both the mesh and environment lighting. Mesh2NeRF extracts accurate radiance fields that provide direct supervision for training generative NeRFs and single-scene representations. We validate the effectiveness of Mesh2NeRF across various tasks, achieving a 3.12 dB improvement in PSNR for view synthesis in single-scene representation on the ABO dataset, a 0.69 dB PSNR improvement in single-view conditional generation on ShapeNet Cars, and notably improved mesh extraction from NeRF in unconditional generation on Objaverse Mugs.
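To make the density formulation above concrete, here is a minimal Python sketch of an occupancy-based density field, assuming mesh distance queries via the trimesh library. The function `density_from_mesh` and the parameters `surface_thickness` and `alpha` are illustrative assumptions, not the paper's actual formulation: the paper derives the radiance field analytically (including view-dependent color from a reflection function), whereas this sketch only shows the core idea of a density that is constant within a shell of defined thickness around the surface and zero elsewhere.

```python
import numpy as np
import trimesh

def density_from_mesh(points, mesh, surface_thickness=0.01, alpha=1e3):
    """Occupancy-based density sketch: points within `surface_thickness`
    of the mesh surface get constant density `alpha`, all others get zero.

    Both parameter values are illustrative, not taken from the paper.
    """
    # Unsigned distance from each query point to its closest surface point.
    _, dist, _ = trimesh.proximity.closest_point(mesh, points)
    # Binary occupancy for the shell of half-width `surface_thickness`.
    occupancy = (dist <= surface_thickness).astype(np.float32)
    return alpha * occupancy

# Example usage on a simple analytic mesh and random query points.
mesh = trimesh.creation.icosphere()
pts = np.random.uniform(-1.5, 1.5, size=(1024, 3))
sigma = density_from_mesh(pts, mesh)
```

Because the density comes directly from mesh geometry rather than from fitting multi-view renderings, such a field is exact by construction, which is what lets Mesh2NeRF supervise NeRF training without occlusion or under-fitting artifacts.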
