
DiMeR: Disentangled Mesh Reconstruction Model

April 24, 2025
作者: Lutao Jiang, Jiantao Lin, Kanghao Chen, Wenhang Ge, Xin Yang, Yifan Jiang, Yuanhuiyi Lyu, Xu Zheng, Yingcong Chen
cs.AI

Abstract
With the advent of large-scale 3D datasets, feed-forward 3D generative models, such as the Large Reconstruction Model (LRM), have gained significant attention and achieved remarkable success. However, we observe that RGB images often lead to conflicting training objectives and lack the necessary clarity for geometry reconstruction. In this paper, we revisit the inductive biases associated with mesh reconstruction and introduce DiMeR, a novel disentangled dual-stream feed-forward model for sparse-view mesh reconstruction. The key idea is to disentangle both the input and the framework into geometry and texture parts, thereby reducing the training difficulty of each part according to the principle of Occam's razor. Given that normal maps are strictly consistent with geometry and accurately capture surface variations, we use normal maps as the exclusive input to the geometry branch, reducing the complexity between the network's input and output. Moreover, we improve the mesh extraction algorithm to introduce 3D ground-truth supervision. As for the texture branch, we use RGB images as input to obtain the textured mesh. Overall, DiMeR demonstrates robust capabilities across various tasks, including sparse-view reconstruction, single-image-to-3D, and text-to-3D. Extensive experiments show that DiMeR significantly outperforms previous methods, achieving over 30% improvement in Chamfer Distance on the GSO and OmniObject3D datasets.
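The disentanglement described in the abstract can be sketched as a routing rule: normal maps feed only the geometry branch, and RGB images feed only the texture branch. The following is a minimal illustrative sketch, not the authors' implementation; `GeometryBranch`, `TextureBranch`, the SDF-grid resolution, and the toy surface-extraction threshold are all assumptions made for illustration.

```python
import numpy as np

class GeometryBranch:
    """Illustrative stand-in: maps sparse-view normal maps to a dense SDF grid.
    (The real model is a learned feed-forward network; this is a placeholder.)"""
    def __call__(self, normal_maps):
        # normal_maps: (views, H, W, 3) -> toy SDF samples on a 32^3 grid
        return np.mean(normal_maps) + np.zeros((32, 32, 32))

class TextureBranch:
    """Illustrative stand-in: maps RGB views to per-surface-point colors."""
    def __call__(self, rgb_images, surface_points):
        return np.full((len(surface_points), 3), rgb_images.mean())

def dimer_forward(normal_maps, rgb_images):
    # Geometry branch sees ONLY normal maps (strictly geometry-consistent input);
    # texture branch sees ONLY RGB images -- the dual-stream split from the paper.
    sdf = GeometryBranch()(normal_maps)
    surface_points = np.argwhere(np.abs(sdf) < 0.5)  # toy "mesh extraction"
    colors = TextureBranch()(rgb_images, surface_points)
    return sdf, surface_points, colors
```

The point of the split is that each branch's input matches its output modality, which the paper argues lowers training difficulty for both geometry and texture.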

