

IllumiNeRF: 3D Relighting without Inverse Rendering

June 10, 2024
作者: Xiaoming Zhao, Pratul P. Srinivasan, Dor Verbin, Keunhong Park, Ricardo Martin Brualla, Philipp Henzler
cs.AI

Abstract

Existing methods for relightable view synthesis -- using a set of images of an object under unknown lighting to recover a 3D representation that can be rendered from novel viewpoints under a target illumination -- are based on inverse rendering, and attempt to disentangle the object geometry, materials, and lighting that explain the input images. Furthermore, this typically involves optimization through differentiable Monte Carlo rendering, which is brittle and computationally expensive. In this work, we propose a simpler approach: we first relight each input image using an image diffusion model conditioned on lighting and then reconstruct a Neural Radiance Field (NeRF) with these relit images, from which we render novel views under the target lighting. We demonstrate that this strategy is surprisingly competitive and achieves state-of-the-art results on multiple relighting benchmarks. Please see our project page at https://illuminerf.github.io/.
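The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal structural sketch, not the authors' implementation: `relight_with_diffusion` and `fit_nerf` are hypothetical stand-ins for the lighting-conditioned image diffusion model and the NeRF optimization, whose internals the abstract does not specify.

```python
# Structural sketch of the IllumiNeRF pipeline from the abstract.
# relight_with_diffusion and fit_nerf are hypothetical placeholders
# (assumed names, not the paper's actual API).

def relight_with_diffusion(image, target_lighting):
    # Placeholder: a lighting-conditioned image diffusion model would
    # generate a relit version of `image` under `target_lighting`.
    return {"pixels": image["pixels"], "lighting": target_lighting}

def fit_nerf(relit_images):
    # Placeholder: optimize a Neural Radiance Field on the relit images;
    # novel views under the target lighting are rendered from this NeRF.
    return {"num_train_images": len(relit_images)}

def illuminerf_pipeline(input_images, target_lighting):
    # Stage 1: relight each input image independently, conditioned
    # on the target lighting (no inverse rendering involved).
    relit = [relight_with_diffusion(img, target_lighting)
             for img in input_images]
    # Stage 2: reconstruct a NeRF from the relit image set.
    return fit_nerf(relit)

if __name__ == "__main__":
    images = [{"pixels": f"img_{i}"} for i in range(4)]
    nerf = illuminerf_pipeline(images, target_lighting="env_map_A")
    print(nerf["num_train_images"])  # prints 4
```

The key design choice the abstract highlights is that all lighting-dependent reasoning happens per-image in stage 1, so stage 2 reduces to standard NeRF reconstruction rather than brittle differentiable Monte Carlo optimization.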

