SparseCraft: Few-Shot Neural Reconstruction through Stereopsis Guided Geometric Linearization
July 19, 2024
Authors: Mae Younes, Amine Ouasfi, Adnane Boukhayma
cs.AI
Abstract
We present a novel approach for recovering 3D shape and view-dependent appearance from a few colored images, enabling efficient 3D reconstruction and novel view synthesis. Our method learns an implicit neural representation in the form of a Signed Distance Function (SDF) and a radiance field. The model is trained progressively through ray-marching-enabled volumetric rendering, and regularized with learning-free multi-view stereo (MVS) cues. Key to our contribution is a novel implicit neural shape function learning strategy that encourages our SDF field to be as linear as possible near the level set, thereby making the training robust to noise emanating from the supervision and regularization signals. Without using any pretrained priors, our method, called SparseCraft, achieves state-of-the-art performance in both novel view synthesis and reconstruction from sparse views on standard benchmarks, while requiring less than 10 minutes of training.
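The key regularization described in the abstract, encouraging the SDF to be as linear as possible near its zero level set using learning-free MVS cues, can be illustrated with a short sketch. This is a hypothetical PyTorch illustration, not the authors' code: `sdf_net`, the sampling scheme, and the loss terms are assumptions. The underlying idea is that a locally linear SDF around an MVS surface point p with unit normal n satisfies f(x) ≈ (x − p)·n and ∇f(x) ≈ n, i.e. its first-order Taylor expansion holds exactly.

```python
import torch

def taylor_linearization_loss(sdf_net, mvs_points, mvs_normals,
                              eps=0.01, n_samples=8):
    """Hypothetical sketch of a Taylor-expansion-based linearization loss.

    sdf_net:     callable mapping (M, 3) points to (M, 1) signed distances.
    mvs_points:  (N, 3) surface points from a learning-free MVS method.
    mvs_normals: (N, 3) unit normals at those points.
    """
    # Probe the near-surface region with random samples in an eps-ball
    # around each MVS point.
    p = mvs_points.unsqueeze(1).expand(-1, n_samples, -1).reshape(-1, 3)
    n = mvs_normals.unsqueeze(1).expand(-1, n_samples, -1).reshape(-1, 3)
    x = (p + eps * torch.randn_like(p)).detach().requires_grad_(True)

    f = sdf_net(x).reshape(-1, 1)  # predicted signed distances at samples
    # SDF gradient via autograd; create_graph=True keeps it differentiable
    # so the loss can be backpropagated through the gradient itself.
    grad = torch.autograd.grad(f.sum(), x, create_graph=True)[0]

    # First-order Taylor targets: f(x) ≈ (x - p) · n and ∇f(x) ≈ n.
    taylor = ((x - p) * n).sum(dim=-1, keepdim=True)
    loss_value = (f - taylor).abs().mean()
    loss_normal = (grad - n).norm(dim=-1).mean()
    return loss_value + loss_normal
```

In a full training loop, a term like this would be weighted and added to the volumetric rendering loss computed from the sampled rays.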