Subsurface Scattering for 3D Gaussian Splatting
August 22, 2024
Authors: Jan-Niklas Dihlmann, Arjun Majumdar, Andreas Engelhardt, Raphael Braun, Hendrik P. A. Lensch
cs.AI
Abstract
3D reconstruction and relighting of objects made from scattering materials
present a significant challenge due to the complex light transport beneath the
surface. 3D Gaussian Splatting introduced high-quality novel view synthesis at
real-time speeds. While 3D Gaussians efficiently approximate an object's
surface, they fail to capture the volumetric properties of subsurface
scattering. We propose a framework for optimizing an object's shape together
with the radiance transfer field given multi-view OLAT (one light at a time)
data. Our method decomposes the scene into an explicit surface represented as
3D Gaussians, with a spatially varying BRDF, and an implicit volumetric
representation of the scattering component. A learned incident light field
accounts for shadowing. We optimize all parameters jointly via ray-traced
differentiable rendering. Our approach enables material editing, relighting and
novel view synthesis at interactive rates. We show successful application on
synthetic data and introduce a newly acquired multi-view multi-light dataset of
objects in a light-stage setup. Compared to previous work, we achieve comparable
or better results at a fraction of optimization and rendering time while
enabling detailed control over material attributes. Project page:
https://sss.jdihlmann.com/
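The abstract's core decomposition, an explicit surface (BRDF) term plus an implicit, learned subsurface-scattering component, can be sketched conceptually as below. This is a minimal illustration, not the paper's implementation: the Lambertian term is a hypothetical stand-in for the spatially varying BRDF, and `sss_residual` stands in for the output of the implicit volumetric network evaluated at the same point.

```python
import numpy as np

def shade_point(albedo, normal, light_dir, sss_residual):
    """Sketch of the decomposition: outgoing radiance at a surface point is
    modeled as an explicit BRDF term plus a learned subsurface residual.
    All quantities are per-channel RGB arrays; directions are unit vectors."""
    n_dot_l = max(float(np.dot(normal, light_dir)), 0.0)  # clamped cosine
    surface = albedo * n_dot_l / np.pi                    # diffuse BRDF stand-in
    return surface + sss_residual                         # add SSS component

# Example: frontal light on a white surface with a small scattering residual.
radiance = shade_point(
    albedo=np.array([1.0, 1.0, 1.0]),
    normal=np.array([0.0, 0.0, 1.0]),
    light_dir=np.array([0.0, 0.0, 1.0]),
    sss_residual=np.array([0.1, 0.1, 0.1]),
)
```

In the paper's actual pipeline both components, along with the incident light field that models shadowing, are optimized jointly through ray-traced differentiable rendering; the sketch only shows how the two terms combine at shading time.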