IllumiNeRF: 3D Relighting without Inverse Rendering

June 10, 2024
作者: Xiaoming Zhao, Pratul P. Srinivasan, Dor Verbin, Keunhong Park, Ricardo Martin Brualla, Philipp Henzler
cs.AI

Abstract

Existing methods for relightable view synthesis -- using a set of images of an object under unknown lighting to recover a 3D representation that can be rendered from novel viewpoints under a target illumination -- are based on inverse rendering, and attempt to disentangle the object geometry, materials, and lighting that explain the input images. Furthermore, this typically involves optimization through differentiable Monte Carlo rendering, which is brittle and computationally expensive. In this work, we propose a simpler approach: we first relight each input image using an image diffusion model conditioned on lighting and then reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting. We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks. Please see our project page at https://illuminerf.github.io/.
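The two-stage pipeline described in the abstract -- relight every input image with a lighting-conditioned diffusion model, then fit a NeRF to the relit images and render novel views -- can be sketched as follows. This is a minimal illustrative sketch: `relight_with_diffusion`, `fit_nerf`, and `render_view` are hypothetical stand-ins for the real components, not the authors' implementation.

```python
import numpy as np

def relight_with_diffusion(image, target_lighting, num_samples=4):
    """Stand-in for a lighting-conditioned image diffusion model.

    A real model would sample several plausible relit versions of the
    input image given the target illumination; here we just modulate the
    image by a lighting scalar and add small noise (illustrative only).
    """
    rng = np.random.default_rng(0)
    return [
        np.clip(image * target_lighting.mean()
                + 0.01 * rng.standard_normal(image.shape), 0.0, 1.0)
        for _ in range(num_samples)
    ]

def fit_nerf(relit_images):
    """Stand-in for NeRF reconstruction from the relit image set.

    A real implementation would optimize a radiance field that explains
    all relit views; here we simply store the images.
    """
    return {"images": relit_images}

def render_view(nerf, view_index):
    """Stand-in for rendering a view from the reconstructed NeRF."""
    return nerf["images"][view_index]

# Pipeline: relight each input image, reconstruct, then render.
input_images = [np.full((8, 8, 3), 0.5) for _ in range(3)]
target_lighting = np.array([0.8, 0.8, 0.8])  # e.g. a summary of the target environment map

relit = [relight_with_diffusion(img, target_lighting)[0] for img in input_images]
nerf = fit_nerf(relit)
novel_view = render_view(nerf, view_index=0)
print(novel_view.shape)  # (8, 8, 3)
```

The key design point the abstract emphasizes is that this sidesteps differentiable Monte Carlo inverse rendering entirely: lighting is handled per-image by the diffusion model, and the NeRF only has to reconcile the relit images into a consistent 3D representation.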

