FrugalNeRF: Fast Convergence for Few-shot Novel View Synthesis without Learned Priors
October 21, 2024
Authors: Chin-Yang Lin, Chung-Ho Wu, Chang-Han Yeh, Shih-Han Yen, Cheng Sun, Yu-Lun Liu
cs.AI
Abstract
Neural Radiance Fields (NeRF) face significant challenges in few-shot
scenarios, primarily due to overfitting and long training times for
high-fidelity rendering. Existing methods, such as FreeNeRF and SparseNeRF, use
frequency regularization or pre-trained priors but struggle with complex
scheduling and bias. We introduce FrugalNeRF, a novel few-shot NeRF framework
that leverages weight-sharing voxels across multiple scales to efficiently
represent scene details. Our key contribution is a cross-scale geometric
adaptation scheme that selects pseudo ground truth depth based on reprojection
errors across scales. This guides training without relying on externally
learned priors, enabling full utilization of the training data. It can also
integrate pre-trained priors, enhancing quality without slowing convergence.
Experiments on LLFF, DTU, and RealEstate-10K show that FrugalNeRF outperforms
other few-shot NeRF methods while significantly reducing training time, making
it a practical solution for efficient and accurate 3D scene reconstruction.
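To make the cross-scale geometric adaptation idea concrete, the sketch below shows one way the pseudo ground-truth depth selection could be implemented: depths rendered at several voxel scales are each warped into another training view, and for every ray the depth with the lowest reprojection error is kept as supervision for the remaining scales. This is only an illustrative NumPy sketch under assumed array shapes, function names, pinhole intrinsics, and a nearest-neighbor color comparison; it is not the authors' implementation.

```python
# Illustrative sketch (not FrugalNeRF's code): choose a pseudo ground-truth
# depth per ray by picking, among depths rendered at multiple voxel scales,
# the one whose reprojection into a second training view matches best.
import numpy as np

def reprojection_error(depth, uv, K, T_src_to_ref, ref_image, src_image):
    """Warp source-view pixels `uv` (N, 2) into the reference view using the
    per-ray `depth` (N,), then compare colors. Assumes pinhole intrinsics K
    (3, 3) and a 4x4 relative pose T_src_to_ref; returns a per-ray error."""
    ones = np.ones((uv.shape[0], 1))
    # Back-project pixels to 3D points in the source camera frame.
    pix_h = np.concatenate([uv, ones], axis=1)                  # (N, 3)
    rays = (np.linalg.inv(K) @ pix_h.T).T                       # (N, 3)
    pts_src = rays * depth[:, None]                             # (N, 3)
    # Transform into the reference camera frame and project to pixels.
    pts_h = np.concatenate([pts_src, ones], axis=1)             # (N, 4)
    pts_ref = (T_src_to_ref @ pts_h.T).T[:, :3]
    proj = (K @ pts_ref.T).T
    proj_uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)
    # Nearest-neighbor color lookup in both views, then a photometric error.
    h, w = ref_image.shape[:2]
    u = np.clip(np.round(proj_uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(proj_uv[:, 1]).astype(int), 0, h - 1)
    src_colors = src_image[uv[:, 1].astype(int), uv[:, 0].astype(int)]
    ref_colors = ref_image[v, u]
    return np.abs(ref_colors - src_colors).mean(axis=-1)        # (N,)

def select_pseudo_gt_depth(depths_per_scale, uv, K, T_src_to_ref,
                           ref_image, src_image):
    """For each ray, keep the depth (from whichever scale) with the lowest
    reprojection error; that depth can then supervise the other scales
    without relying on any externally learned prior."""
    errors = np.stack([
        reprojection_error(d, uv, K, T_src_to_ref, ref_image, src_image)
        for d in depths_per_scale
    ], axis=0)                                                  # (S, N)
    best_scale = errors.argmin(axis=0)                          # (N,)
    depths = np.stack(depths_per_scale, axis=0)                 # (S, N)
    return depths[best_scale, np.arange(depths.shape[1])], best_scale
```

The per-ray selection keeps supervision grounded in the training images themselves, which is consistent with the abstract's claim that the scheme guides training without external learned priors; how FrugalNeRF actually renders the per-scale depths and weights the resulting depth loss is not specified here.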