
SparseCraft: Few-Shot Neural Reconstruction through Stereopsis Guided Geometric Linearization

July 19, 2024
作者: Mae Younes, Amine Ouasfi, Adnane Boukhayma
cs.AI

Abstract

We present a novel approach for recovering 3D shape and view-dependent appearance from a few colored images, enabling efficient 3D reconstruction and novel view synthesis. Our method learns an implicit neural representation in the form of a Signed Distance Function (SDF) and a radiance field. The model is trained progressively through ray-marching-enabled volumetric rendering, and regularized with learning-free multi-view stereo (MVS) cues. Key to our contribution is a novel implicit neural shape function learning strategy that encourages our SDF field to be as linear as possible near the level set, hence making the training more robust to noise emanating from the supervision and regularization signals. Without using any pretrained priors, our method, called SparseCraft, achieves state-of-the-art performance in both novel-view synthesis and reconstruction from sparse views on standard benchmarks, while requiring less than 10 minutes for training.
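The linearization idea can be illustrated with a toy first-order Taylor residual: a field that is linear in a neighborhood of a point incurs zero error when extrapolated one small step along its gradient, while curvature shows up as an O(eps²) residual. The numpy sketch below is a hypothetical illustration under my own assumptions (the function name, the finite-difference gradient, and the test fields are not from the paper, which uses a learned SDF and its analytic gradients):

```python
import numpy as np

def linearization_residual(sdf, pts, eps=1e-2, step=1e-4):
    """Deviation of a scalar field from its first-order Taylor expansion
    along the gradient direction. Illustrative toy only: a real SDF
    network would supply analytic gradients, not finite differences."""
    grad = np.zeros_like(pts)
    for i in range(pts.shape[1]):
        e = np.zeros(pts.shape[1])
        e[i] = step
        grad[:, i] = (sdf(pts + e) - sdf(pts - e)) / (2.0 * step)
    # Step a small distance eps along the normalized gradient direction.
    delta = eps * grad / (np.linalg.norm(grad, axis=1, keepdims=True) + 1e-12)
    predicted = sdf(pts) + np.sum(delta * grad, axis=1)  # first-order prediction
    return np.abs(sdf(pts + delta) - predicted)

pts = np.random.rand(8, 3) + 0.5  # keep sample points away from the origin

# A plane SDF is exactly linear, so the residual vanishes.
plane = lambda p: p[:, 2] - 0.5
res_plane = linearization_residual(plane, pts)

# A quadratic field is curved; its residual is of order eps**2.
quad = lambda p: np.sum(p * p, axis=1) - 1.0
res_quad = linearization_residual(quad, pts)
```

A training loss built on this idea would penalize the residual at points sampled near the zero level set, pushing the learned SDF toward local linearity there.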

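The abstract's "ray-marching-enabled volumetric rendering" refers to compositing point samples along camera rays. The sketch below shows the generic alpha-compositing quadrature common to volumetric renderers; it is not SparseCraft's exact renderer (which derives densities from the learned SDF), and all names are illustrative:

```python
import math

def render_ray(densities, deltas, colors):
    """Alpha-composite point samples along one ray.
    densities: per-sample sigma values; deltas: inter-sample distances;
    colors: per-sample RGB tuples. Generic quadrature, for illustration."""
    transmittance = 1.0          # fraction of light surviving so far
    out = [0.0, 0.0, 0.0]
    weights = []
    for sigma, dt, rgb in zip(densities, deltas, colors):
        alpha = 1.0 - math.exp(-sigma * dt)   # opacity of this segment
        w = transmittance * alpha             # contribution weight of the sample
        weights.append(w)
        out = [o + w * c for o, c in zip(out, rgb)]
        transmittance *= 1.0 - alpha          # attenuate for later samples
    return out, weights

# A very dense ray saturates: nearly all weight lands on the samples.
color, w = render_ray([200.0] * 3, [0.1] * 3, [(1.0, 0.0, 0.0)] * 3)
```

During training, the composited color is compared against the input images, while the per-sample weights determine where supervision and regularization signals act along each ray.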