Binary Opacity Grids: Capturing Fine Geometric Detail for Mesh-Based View Synthesis
February 19, 2024
Authors: Christian Reiser, Stephan Garbin, Pratul P. Srinivasan, Dor Verbin, Richard Szeliski, Ben Mildenhall, Jonathan T. Barron, Peter Hedman, Andreas Geiger
cs.AI
Abstract
While surface-based view synthesis algorithms are appealing due to their low
computational requirements, they often struggle to reproduce thin structures.
In contrast, more expensive methods that model the scene's geometry as a
volumetric density field (e.g. NeRF) excel at reconstructing fine geometric
detail. However, density fields often represent geometry in a "fuzzy" manner,
which hinders exact localization of the surface. In this work, we modify
density fields to encourage them to converge towards surfaces, without
compromising their ability to reconstruct thin structures. First, we employ a
discrete opacity grid representation instead of a continuous density field,
which allows opacity values to discontinuously transition from zero to one at
the surface. Second, we anti-alias by casting multiple rays per pixel, which
allows occlusion boundaries and subpixel structures to be modelled without
using semi-transparent voxels. Third, we minimize the binary entropy of the
opacity values, which facilitates the extraction of surface geometry by
encouraging opacity values to binarize towards the end of training. Lastly, we
develop a fusion-based meshing strategy followed by mesh simplification and
appearance model fitting. The compact meshes produced by our model can be
rendered in real-time on mobile devices and achieve significantly higher view
synthesis quality compared to existing mesh-based approaches.
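To illustrate the binary-entropy regularizer described in the abstract, here is a minimal NumPy sketch. The function name, the `eps` clamp, and the use of mean entropy as a loss term are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def binary_entropy(alpha, eps=1e-7):
    """Binary entropy H(a) = -a*log(a) - (1-a)*log(1-a), in nats.

    Values are clamped away from exactly 0 and 1 to keep log() finite.
    """
    a = np.clip(alpha, eps, 1.0 - eps)
    return -(a * np.log(a) + (1.0 - a) * np.log(1.0 - a))

# Entropy peaks at alpha = 0.5 (log 2 nats) and vanishes as alpha -> 0 or 1,
# so minimizing the mean entropy pushes opacity values toward binary {0, 1},
# which is what makes surface extraction from the grid well defined.
opacities = np.array([0.01, 0.5, 0.99])
entropy_loss = binary_entropy(opacities).mean()
```

In training, a term like `entropy_loss` would be weighted and added to the photometric reconstruction loss, typically ramped up toward the end of optimization so that opacities first converge freely and only then binarize.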