

BoostMVSNeRFs: Boosting MVS-based NeRFs to Generalizable View Synthesis in Large-scale Scenes

July 22, 2024
作者: Chih-Hai Su, Chih-Yao Hu, Shr-Ruei Tsai, Jie-Ying Lee, Chin-Yang Lin, Yu-Lun Liu
cs.AI

Abstract

While Neural Radiance Fields (NeRFs) have demonstrated exceptional quality, their protracted training duration remains a limitation. Generalizable and MVS-based NeRFs, although capable of reducing training time, often incur tradeoffs in quality. This paper presents a novel approach called BoostMVSNeRFs to enhance the rendering quality of MVS-based NeRFs in large-scale scenes. We first identify limitations in MVS-based NeRF methods, such as restricted viewport coverage and artifacts due to limited input views. Then, we address these limitations by proposing a new method that selects and combines multiple cost volumes during volume rendering. Our method does not require training and can adapt to any MVS-based NeRF method in a feed-forward fashion to improve rendering quality. Furthermore, our approach is also end-to-end trainable, allowing fine-tuning on specific scenes. We demonstrate the effectiveness of our method through experiments on large-scale datasets, showing significant rendering quality improvements in large-scale scenes and unbounded outdoor scenarios. We release the source code of BoostMVSNeRFs at https://su-terry.github.io/BoostMVSNeRFs/.
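The core idea of combining several cost volumes during volume rendering can be illustrated with a small sketch. The snippet below is a hypothetical simplification (not the paper's implementation): it blends per-cost-volume rendered colors for a batch of rays, weighting each volume by a validity mask (whether the ray falls inside that volume's viewport) and its accumulated opacity, so regions outside a volume's coverage contribute nothing. The function name, array shapes, and weighting scheme are all illustrative assumptions.

```python
import numpy as np

def blend_cost_volume_renders(rgbs, alphas, masks):
    """Hypothetical sketch: fuse per-cost-volume renderings for one ray batch.

    rgbs:   (V, N, 3) colors rendered from each of V selected cost volumes
    alphas: (V, N)    accumulated opacities along each ray, per cost volume
    masks:  (V, N)    1.0 where a ray lies inside a volume's viewport, else 0.0
    """
    # Weight each cost volume by validity and opacity so that rays outside
    # a volume's coverage do not pick up its (artifact-prone) output.
    w = masks * alphas                          # (V, N)
    w_sum = w.sum(axis=0, keepdims=True)        # (1, N)
    # Normalize; fall back to a uniform average where no volume covers a ray.
    w = np.where(w_sum > 0, w / np.maximum(w_sum, 1e-8), 1.0 / len(rgbs))
    return (w[..., None] * rgbs).sum(axis=0)    # (N, 3)
```

Because the blend happens per ray at render time, this kind of fusion needs no retraining of the underlying MVS-based NeRF, matching the feed-forward usage described in the abstract.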
