GS^3: Efficient Relighting with Triple Gaussian Splatting
October 15, 2024
Authors: Zoubin Bi, Yixin Zeng, Chong Zeng, Fan Pei, Xiang Feng, Kun Zhou, Hongzhi Wu
cs.AI
Abstract
We present a spatial and angular Gaussian based representation and a triple
splatting process, for real-time, high-quality novel lighting-and-view
synthesis from multi-view point-lit input images. To describe complex
appearance, we employ a Lambertian plus a mixture of angular Gaussians as an
effective reflectance function for each spatial Gaussian. To generate
self-shadows, we splat all spatial Gaussians towards the light source to obtain
shadow values, which are further refined by a small multi-layer perceptron. To
compensate for other effects like global illumination, another network is
trained to compute and add a per-spatial-Gaussian RGB tuple. The effectiveness
of our representation is demonstrated on 30 samples with a wide variation in
geometry (from solid to fluffy) and appearance (from translucent to
anisotropic), as well as using different forms of input data, including
rendered images of synthetic/reconstructed objects, photographs captured with a
handheld camera and a flash, or from a professional lightstage. We achieve a
training time of 40-70 minutes and a rendering speed of 90 fps on a single
commodity GPU. Our results compare favorably with state-of-the-art techniques
in terms of quality/performance. Our code and data are publicly available at
https://GSrelight.github.io/.
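Note: the abstract does not give the exact form of the reflectance function. As a non-authoritative sketch, a Lambertian term plus a mixture of M isotropic angular (spherical) Gaussian lobes could be written as below, where the diffuse albedo rho_d, lobe colors c_j, sharpnesses lambda_j, lobe axes mu_j, and the half vector h are placeholder symbols of our own; the paper's actual parameterization may differ (e.g., it may use anisotropic lobes):

```latex
f(\omega_i, \omega_o) \;=\; \frac{\rho_d}{\pi}
  \;+\; \sum_{j=1}^{M} c_j \,
        \exp\!\bigl(\lambda_j \, (\mathbf{h}\cdot\boldsymbol{\mu}_j - 1)\bigr),
\qquad
\mathbf{h} \;=\; \frac{\omega_i + \omega_o}{\lVert \omega_i + \omega_o \rVert}.
```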
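Likewise, a minimal PyTorch-style sketch of the two small networks mentioned in the abstract, assuming made-up feature dimensions and a hypothetical splatting helper (not the authors' actual architecture):

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the two small networks described in the abstract:
# (1) an MLP that refines the raw per-Gaussian shadow values obtained by
#     splatting all spatial Gaussians towards the light source, and
# (2) an MLP that predicts an additive per-spatial-Gaussian RGB tuple to
#     compensate for residual effects such as global illumination.
# A function like `splat_towards_light(...)` would supply `raw_shadow`;
# it stands in for a light-oriented Gaussian splatting rasterizer and is
# not a real library call. All dimensions are placeholders.

class ShadowRefiner(nn.Module):
    def __init__(self, feat_dim: int = 8, hidden: int = 32):
        super().__init__()
        # input: raw shadow value plus a small per-Gaussian feature vector
        self.mlp = nn.Sequential(
            nn.Linear(1 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # refined shadow in [0, 1]
        )

    def forward(self, raw_shadow: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([raw_shadow, feats], dim=-1))

class ResidualColor(nn.Module):
    def __init__(self, feat_dim: int = 8, hidden: int = 32):
        super().__init__()
        # input: per-Gaussian features plus the local light direction
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # additive RGB tuple per spatial Gaussian
        )

    def forward(self, feats: torch.Tensor, light_dir: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([feats, light_dir], dim=-1))
```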