Subsurface Scattering for 3D Gaussian Splatting
August 22, 2024
Authors: Jan-Niklas Dihlmann, Arjun Majumdar, Andreas Engelhardt, Raphael Braun, Hendrik P. A. Lensch
cs.AI
Abstract
3D reconstruction and relighting of objects made from scattering materials
present a significant challenge due to the complex light transport beneath the
surface. 3D Gaussian Splatting introduced high-quality novel view synthesis at
real-time speeds. While 3D Gaussians efficiently approximate an object's
surface, they fail to capture the volumetric properties of subsurface
scattering. We propose a framework for optimizing an object's shape together
with the radiance transfer field given multi-view OLAT (one light at a time)
data. Our method decomposes the scene into an explicit surface represented as
3D Gaussians, with a spatially varying BRDF, and an implicit volumetric
representation of the scattering component. A learned incident light field
accounts for shadowing. We optimize all parameters jointly via ray-traced
differentiable rendering. Our approach enables material editing, relighting and
novel view synthesis at interactive rates. We show successful application on
synthetic data and introduce a newly acquired multi-view multi-light dataset of
objects in a light-stage setup. Compared to previous work, we achieve comparable
or better results at a fraction of the optimization and rendering time while
enabling detailed control over material attributes. Project page:
https://sss.jdihlmann.com/
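The decomposition the abstract describes — a shadowed surface reflection term from a spatially varying BRDF plus an implicit subsurface scattering residual, evaluated per OLAT light — can be illustrated with a minimal sketch. All function names here are illustrative assumptions: the BRDF is a toy Lambertian-plus-Blinn-Phong stand-in, and the scattering term is a placeholder for what the paper models as a learned volumetric field.

```python
import numpy as np

def brdf_term(normal, light_dir, view_dir, albedo, roughness):
    """Toy Lambertian + Blinn-Phong stand-in for the spatially
    varying BRDF attached to the explicit Gaussian surface."""
    n_dot_l = max(np.dot(normal, light_dir), 0.0)
    half = light_dir + view_dir
    half = half / np.linalg.norm(half)
    spec = max(np.dot(normal, half), 0.0) ** (1.0 / max(roughness, 1e-3))
    return albedo * n_dot_l + spec * n_dot_l

def scattering_term(point, light_dir, view_dir):
    """Placeholder for the implicit volumetric scattering component;
    a learned field in the paper, a smooth toy falloff here."""
    return 0.1 * np.exp(-np.linalg.norm(point))  # hypothetical

def shade(point, normal, light_dir, view_dir, albedo, roughness, visibility):
    """Outgoing radiance for one OLAT light: visibility-weighted
    surface reflection plus the subsurface scattering residual."""
    surface = brdf_term(normal, light_dir, view_dir, albedo, roughness)
    return visibility * surface + scattering_term(point, light_dir, view_dir)

# One shading point lit head-on by a single OLAT light.
radiance = shade(
    point=np.array([0.0, 0.0, 0.0]),
    normal=np.array([0.0, 0.0, 1.0]),
    light_dir=np.array([0.0, 0.0, 1.0]),
    view_dir=np.array([0.0, 0.0, 1.0]),
    albedo=0.5, roughness=0.5, visibility=1.0,
)
```

In the actual method, `visibility` would come from the learned incident light field, and all terms would be differentiable so shape, BRDF, and scattering parameters can be optimized jointly via ray-traced differentiable rendering.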