

GTR: Improving Large 3D Reconstruction Models through Geometry and Texture Refinement

June 9, 2024
Authors: Peiye Zhuang, Songfang Han, Chaoyang Wang, Aliaksandr Siarohin, Jiaxu Zou, Michael Vasilkovsky, Vladislav Shakhrai, Sergey Korolev, Sergey Tulyakov, Hsin-Ying Lee
cs.AI

Abstract

We propose a novel approach for 3D mesh reconstruction from multi-view images. Our method takes inspiration from large reconstruction models such as LRM, which use a transformer-based triplane generator and a Neural Radiance Field (NeRF) model trained on multi-view images. However, we introduce several important modifications that significantly enhance 3D reconstruction quality. First, we examine the original LRM architecture and identify several shortcomings. We then introduce corresponding modifications to the LRM architecture, which lead to improved multi-view image representation and more computationally efficient training. Second, to improve geometry reconstruction and enable supervision at full image resolution, we extract meshes from the NeRF field in a differentiable manner and fine-tune the NeRF model through mesh rendering. These modifications allow us to achieve state-of-the-art performance on both 2D and 3D evaluation metrics, such as a PSNR of 28.67 on the Google Scanned Objects (GSO) dataset. Despite these superior results, our feed-forward model still struggles to reconstruct complex textures, such as text and portraits on assets. To address this, we introduce a lightweight per-instance texture refinement procedure. This procedure fine-tunes the triplane representation and the NeRF color estimation model on the mesh surface using the input multi-view images, in just 4 seconds. This refinement improves the PSNR to 29.79 and achieves faithful reconstruction of complex textures, such as text. Additionally, our approach enables various downstream applications, including text- or image-to-3D generation.
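To make the per-instance refinement idea concrete, the sketch below fits a toy triplane-like color field to observed surface-point colors by gradient descent, in the spirit of the fine-tuning step described above. This is a minimal illustration, not the paper's implementation: the names (`TriplaneColorField`, `refine`), the nearest-neighbor plane sampling, the plane resolution, and the random stand-in for multi-view observations are all assumptions for the sake of a self-contained example.

```python
import numpy as np

RES = 16  # plane resolution (hypothetical; real triplanes are far larger)

class TriplaneColorField:
    """Toy triplane: color at a 3D point = sum of samples from three
    axis-aligned 2D planes (xy, xz, yz), nearest-neighbor for simplicity."""

    def __init__(self, res=RES, seed=0):
        rng = np.random.default_rng(seed)
        # three planes, each res x res x 3 (RGB), small random init
        self.planes = [rng.normal(0, 0.01, (res, res, 3)) for _ in range(3)]
        self.res = res

    def _indices(self, pts):
        # pts in [0,1]^3 -> nearest cell indices on each plane
        ij = np.clip((pts * (self.res - 1)).round().astype(int), 0, self.res - 1)
        return [(ij[:, a], ij[:, b]) for a, b in ((0, 1), (0, 2), (1, 2))]

    def color(self, pts):
        idx = self._indices(pts)
        return sum(p[i, j] for p, (i, j) in zip(self.planes, idx))

    def refine(self, pts, target_rgb, steps=200, lr=0.5):
        """Gradient descent on mean-squared color error; with nearest-neighbor
        sampling, each plane's gradient is the per-point residual scattered
        into the sampled cells."""
        for _ in range(steps):
            idx = self._indices(pts)
            resid = self.color(pts) - target_rgb           # (N, 3)
            for p, (i, j) in zip(self.planes, idx):
                grad = np.zeros_like(p)
                np.add.at(grad, (i, j), resid)             # scatter residuals
                p -= lr * grad / len(pts)
        return float(np.mean((self.color(pts) - target_rgb) ** 2))

# usage: "refine" the color field against stand-in multi-view observations
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (512, 3))        # sampled mesh-surface points
target = rng.uniform(0, 1, (512, 3))     # colors observed from input views
field = TriplaneColorField()
before = float(np.mean((field.color(pts) - target) ** 2))
after = field.refine(pts, target)        # after < before: error drops
```

The design point being illustrated is that refinement touches only a small, cheap set of parameters (the planes and color head), which is why the paper can report per-instance fine-tuning on the order of seconds rather than a full retraining.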

