GS^3: Efficient Relighting with Triple Gaussian Splatting

October 15, 2024
Authors: Zoubin Bi, Yixin Zeng, Chong Zeng, Fan Pei, Xiang Feng, Kun Zhou, Hongzhi Wu
cs.AI

Abstract

We present a spatial and angular Gaussian based representation and a triple splatting process, for real-time, high-quality novel lighting-and-view synthesis from multi-view point-lit input images. To describe complex appearance, we employ a Lambertian plus a mixture of angular Gaussians as an effective reflectance function for each spatial Gaussian. To generate self-shadow, we splat all spatial Gaussians towards the light source to obtain shadow values, which are further refined by a small multi-layer perceptron. To compensate for other effects like global illumination, another network is trained to compute and add a per-spatial-Gaussian RGB tuple. The effectiveness of our representation is demonstrated on 30 samples with a wide variation in geometry (from solid to fluffy) and appearance (from translucent to anisotropic), as well as using different forms of input data, including rendered images of synthetic/reconstructed objects, photographs captured with a handheld camera and a flash, or from a professional lightstage. We achieve a training time of 40-70 minutes and a rendering speed of 90 fps on a single commodity GPU. Our results compare favorably with state-of-the-art techniques in terms of quality/performance. Our code and data are publicly available at https://GSrelight.github.io/.
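
To make the reflectance model concrete, here is a minimal sketch of the kind of per-spatial-Gaussian reflectance function the abstract describes: a Lambertian term plus a mixture of K angular Gaussian lobes. The specific parameterization below (diffuse albedo $\rho_d$, lobe weights $c_k$, lobe sharpnesses $\lambda_k$, lobe axes $\boldsymbol{\mu}_k$, and the half vector $\mathbf{h}$) is an illustrative assumption, not necessarily the paper's exact formulation:

$$
f_r(\omega_i, \omega_o) \;=\; \frac{\rho_d}{\pi} \;+\; \sum_{k=1}^{K} c_k \exp\!\big(\lambda_k\,(\mathbf{h}\cdot\boldsymbol{\mu}_k - 1)\big),
\qquad
\mathbf{h} \;=\; \frac{\omega_i + \omega_o}{\lVert \omega_i + \omega_o \rVert}.
$$

Each exponential term is a spherical Gaussian that peaks when $\mathbf{h}$ aligns with $\boldsymbol{\mu}_k$, so a small number of lobes can add glossy highlights on top of the diffuse base; capturing the strongly anisotropic appearance the paper demonstrates would call for anisotropic lobes, which this sketch omits.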
