RRM: Relightable assets using Radiance guided Material extraction
July 8, 2024
Authors: Diego Gomez, Julien Philip, Adrien Kaiser, Élie Michel
cs.AI
Abstract
Synthesizing NeRFs under arbitrary lighting has become a seminal problem in
the last few years. Recent efforts tackle the problem via the extraction of
physically-based parameters that can then be rendered under arbitrary lighting,
but they are limited in the range of scenes they can handle, usually
mishandling glossy scenes. We propose RRM, a method that can extract the
materials, geometry, and environment lighting of a scene even in the presence
of highly reflective objects. Our method consists of a physically-aware
radiance field representation that informs physically-based parameters, and an
expressive environment light structure based on a Laplacian Pyramid. We
demonstrate that our contributions outperform the state-of-the-art on parameter
retrieval tasks, leading to high-fidelity relighting and novel view synthesis
on surfacic scenes.
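The abstract does not detail how the Laplacian-pyramid environment light is built; as background, a minimal NumPy sketch of a standard Laplacian pyramid decomposition and its exact reconstruction is shown below. The box-filter downsampling and nearest-neighbor upsampling are simplifying assumptions for illustration, not the paper's actual filters.

```python
import numpy as np

def downsample(img):
    # Assumption: naive 2x downsample by averaging 2x2 blocks (even dimensions).
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Assumption: nearest-neighbor 2x upsample.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def laplacian_pyramid(img, levels):
    """Decompose an image into band-pass detail levels plus a coarse residual."""
    pyramid = []
    current = img
    for _ in range(levels):
        coarse = downsample(current)
        # Detail band: what the coarser level fails to represent.
        pyramid.append(current - upsample(coarse))
        current = coarse
    pyramid.append(current)  # coarsest residual
    return pyramid

def reconstruct(pyramid):
    """Collapse the pyramid back to the original image (exact by construction)."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        current = upsample(current) + detail
    return current
```

Because each detail band stores exactly the error of the coarse approximation, reconstruction is lossless regardless of the filters used, while coarse levels capture smooth ambient lighting and fine levels capture the sharp highlights that glossy surfaces reflect.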