
BoostMVSNeRFs: Boosting MVS-based NeRFs to Generalizable View Synthesis in Large-scale Scenes

July 22, 2024
作者: Chih-Hai Su, Chih-Yao Hu, Shr-Ruei Tsai, Jie-Ying Lee, Chin-Yang Lin, Yu-Lun Liu
cs.AI

Abstract

While Neural Radiance Fields (NeRFs) have demonstrated exceptional quality, their protracted training duration remains a limitation. Generalizable and MVS-based NeRFs, although capable of mitigating training time, often incur tradeoffs in quality. This paper presents a novel approach called BoostMVSNeRFs to enhance the rendering quality of MVS-based NeRFs in large-scale scenes. We first identify limitations in MVS-based NeRF methods, such as restricted viewport coverage and artifacts due to limited input views. Then, we address these limitations by proposing a new method that selects and combines multiple cost volumes during volume rendering. Our method does not require training and can adapt to any MVS-based NeRF method in a feed-forward fashion to improve rendering quality. Furthermore, our approach is also end-to-end trainable, allowing fine-tuning on specific scenes. We demonstrate the effectiveness of our method through experiments on large-scale datasets, showing significant rendering quality improvements in large-scale scenes and unbounded outdoor scenarios. We release the source code of BoostMVSNeRFs at https://su-terry.github.io/BoostMVSNeRFs/.
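To make the core idea concrete, here is a minimal NumPy sketch of blending per-sample predictions from multiple cost volumes inside a standard volume-rendering pass. This is an illustrative sketch only, not the authors' implementation: the function name, the array shapes, and the per-volume blending weights (e.g., derived from view-coverage masks) are assumptions.

```python
import numpy as np

def render_with_fused_cost_volumes(sigmas, colors, weights, deltas):
    """Hypothetical sketch: fuse densities/colors predicted from multiple
    cost volumes, then volume-render along one ray.

    sigmas:  (V, S) densities from V cost volumes at S ray samples
    colors:  (V, S, 3) RGB predictions from each cost volume
    weights: (V, S) per-volume blending weights (assumed, e.g. coverage)
    deltas:  (S,) distances between consecutive ray samples
    """
    # Normalize blending weights across cost volumes at each sample.
    w = weights / np.clip(weights.sum(axis=0, keepdims=True), 1e-8, None)

    # Weighted combination of the V cost-volume predictions.
    sigma = (w * sigmas).sum(axis=0)               # (S,)
    color = (w[..., None] * colors).sum(axis=0)    # (S, 3)

    # Standard volume rendering: alpha compositing along the ray.
    alpha = 1.0 - np.exp(-sigma * deltas)                      # (S,)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    contrib = trans * alpha                                    # (S,)
    return (contrib[:, None] * color).sum(axis=0)              # (3,) RGB
```

Because the fusion happens at the sample level before compositing, it slots into any MVS-based NeRF's feed-forward renderer without retraining, which matches the training-free claim in the abstract.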

