Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
February 19, 2024
Authors: Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger
cs.AI
Abstract
While surface-based view synthesis algorithms are appealing due to their low
computational requirements, they often struggle to reproduce thin structures.
In contrast, more expensive methods that model the scene's geometry as a
volumetric density field (e.g. NeRF) excel at reconstructing fine geometric
detail. However, density fields often represent geometry in a "fuzzy" manner,
which hinders exact localization of the surface. In this work, we modify
density fields to encourage them to converge towards surfaces, without
compromising their ability to reconstruct thin structures. First, we employ a
discrete opacity grid representation instead of a continuous density field,
which allows opacity values to discontinuously transition from zero to one at
the surface. Second, we anti-alias by casting multiple rays per pixel, which
allows occlusion boundaries and subpixel structures to be modelled without
using semi-transparent voxels. Third, we minimize the binary entropy of the
opacity values, which facilitates the extraction of surface geometry by
encouraging opacity values to binarize towards the end of training. Lastly, we
develop a fusion-based meshing strategy followed by mesh simplification and
appearance model fitting. The compact meshes produced by our model can be
rendered in real-time on mobile devices and achieve significantly higher view
synthesis quality compared to existing mesh-based approaches.
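The abstract's first modification replaces a continuous density field with a discrete opacity grid. As a rough illustration of why this helps localize surfaces, the sketch below composites per-sample opacities along one ray: with binary opacities the compositing weights collapse onto the first opaque sample, so the surface position is unambiguous. The function name, array shapes, and sample ordering are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def composite_ray(alphas, colors):
    """Front-to-back compositing of per-sample opacities along one ray.

    alphas: (N,) opacities in [0, 1], ordered front to back.
    colors: (N, 3) RGB color per sample.
    With binary opacities (0 or 1), the weights select the first opaque
    sample, so the ray returns exactly that sample's color. Fractional
    opacities reduce to the usual volume-rendering weights
    w_i = alpha_i * prod_{j<i} (1 - alpha_j).
    """
    transmittance = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    weights = transmittance * alphas            # (N,)
    return weights @ colors                     # (3,) RGB

alphas = np.array([0.0, 1.0, 0.7])              # front to back along the ray
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
print(composite_ray(alphas, colors))            # [0. 1. 0.]: first opaque voxel wins
```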
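The second modification anti-aliases by casting multiple rays per pixel, so hard binary voxels can still produce soft occlusion boundaries and subpixel structures in the rendered image. The following sketch averages several jittered rays through one pixel; `render_ray` is a hypothetical callback, and the 16-sample count and uniform jitter are assumptions (the paper may distribute subpixel samples differently).

```python
import numpy as np

def render_pixel(render_ray, px, py, rays_per_pixel=16, rng=None):
    """Anti-alias one pixel by averaging jittered rays cast through it.

    `render_ray(x, y)` is assumed to map a subpixel coordinate to an RGB
    color (e.g. by compositing opacities along the corresponding ray).
    Averaging many jittered rays recovers fractional pixel coverage, so
    semi-transparent voxels are not needed at occlusion boundaries.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    jitter = rng.random((rays_per_pixel, 2))    # uniform offsets in [0, 1)^2
    samples = [render_ray(px + dx, py + dy) for dx, dy in jitter]
    return np.mean(samples, axis=0)

# Usage with a toy ray renderer that colors by subpixel position:
color = render_pixel(lambda x, y: np.array([x % 1.0, y % 1.0, 0.0]), 10, 20)
print(color)                                    # averaged RGB for pixel (10, 20)
```

The design point is that the averaging happens in image space: each individual ray still hits fully opaque or fully empty voxels, which keeps the geometry crisp while the pixel color remains smooth.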
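The third modification regularizes opacities with their binary entropy, which peaks at 0.5 and vanishes at 0 and 1, so minimizing it pushes opacities toward binary values late in training. A minimal sketch of such a term is below; the function name, clipping epsilon, and any loss weighting are assumptions rather than the paper's exact formulation.

```python
import numpy as np

def binary_entropy(alpha, eps=1e-6):
    """Mean binary entropy H(a) = -a log a - (1 - a) log(1 - a).

    Adding this term to the training loss encourages each opacity to
    converge to 0 or 1, which makes surface extraction well defined.
    """
    a = np.clip(alpha, eps, 1.0 - eps)          # guard against log(0)
    return float(np.mean(-a * np.log(a) - (1.0 - a) * np.log(1.0 - a)))

print(binary_entropy(np.array([0.5, 0.5])))     # ~0.693: maximally fuzzy
print(binary_entropy(np.array([0.001, 0.999]))) # ~0.008: nearly binary
```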