HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting
May 24, 2024
Authors: Yuanhao Cai, Zihao Xiao, Yixun Liang, Yulun Zhang, Xiaokang Yang, Yaoyao Liu, Alan Yuille
cs.AI
Abstract
High dynamic range (HDR) novel view synthesis (NVS) aims to create
photorealistic images from novel viewpoints using HDR imaging techniques. The
rendered HDR images capture a wider range of brightness levels containing more
details of the scene than normal low dynamic range (LDR) images. Existing HDR
NVS methods are mainly based on NeRF. They suffer from long training times and
slow inference speeds. In this paper, we propose a new framework, High Dynamic
Range Gaussian Splatting (HDR-GS), which can efficiently render novel HDR views
and reconstruct LDR images with a user-input exposure time. Specifically, we
design a Dual Dynamic Range (DDR) Gaussian point cloud model that uses
spherical harmonics to fit HDR color and employs an MLP-based tone-mapper to
render LDR color. The HDR and LDR colors are then fed into two Parallel
Differentiable Rasterization (PDR) processes to reconstruct HDR and LDR views.
To establish a data foundation for research on 3D Gaussian splatting-based
methods in HDR NVS, we recalibrate the camera parameters and
compute the initial positions for Gaussian point clouds. Experiments
demonstrate that our HDR-GS surpasses the state-of-the-art NeRF-based method by
3.84 and 1.91 dB on LDR and HDR NVS, respectively, while enjoying a 1000x faster
inference speed and requiring only 6.3% of the training time.
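
The sketch below is a minimal illustration (not the authors' released code) of the Dual Dynamic Range (DDR) idea described in the abstract: each Gaussian carries an HDR color derived from its spherical-harmonic coefficients (only the degree-0 band is shown for brevity), and a small MLP tone-mapper converts that HDR color plus a user-supplied exposure time into an LDR color. PyTorch is assumed; names such as DDRGaussians and ToneMapperMLP, the network widths, and the log-exposure conditioning are illustrative assumptions, and the two Parallel Differentiable Rasterization (PDR) passes are only indicated by comments.

```python
# Minimal sketch of a Dual Dynamic Range (DDR) Gaussian point model, assuming PyTorch.
# Not the authors' implementation; names and hyperparameters are illustrative.
import torch
import torch.nn as nn

SH_C0 = 0.28209479177387814  # constant of the degree-0 spherical-harmonic basis


class ToneMapperMLP(nn.Module):
    """Maps per-point HDR color and an exposure time to LDR color in [0, 1] (illustrative)."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, hdr_rgb: torch.Tensor, exposure: torch.Tensor) -> torch.Tensor:
        # Broadcast the scalar exposure time to every Gaussian; conditioning on
        # log-exposure is an assumption made for this sketch.
        log_t = torch.log(exposure).expand(hdr_rgb.shape[0], 1)
        return self.net(torch.cat([hdr_rgb, log_t], dim=-1))


class DDRGaussians(nn.Module):
    """Per-Gaussian HDR color via spherical harmonics (DC band only) and LDR color via the MLP."""

    def __init__(self, num_points: int):
        super().__init__()
        self.sh_dc = nn.Parameter(torch.zeros(num_points, 3))  # degree-0 SH coefficients
        self.tone_mapper = ToneMapperMLP()

    def forward(self, exposure: torch.Tensor):
        # View-independent part of the SH evaluation; higher bands omitted for brevity.
        hdr_rgb = torch.relu(SH_C0 * self.sh_dc + 0.5)
        # LDR color for the requested exposure time.
        ldr_rgb = self.tone_mapper(hdr_rgb, exposure)
        # Both color sets would next go through two parallel differentiable
        # rasterization (PDR) passes to render the HDR and LDR views; the
        # rasterizer itself is omitted here.
        return hdr_rgb, ldr_rgb


if __name__ == "__main__":
    model = DDRGaussians(num_points=1024)
    hdr, ldr = model(exposure=torch.tensor([1 / 125]))
    print(hdr.shape, ldr.shape)  # torch.Size([1024, 3]) torch.Size([1024, 3])
```

In the full method, the HDR and LDR per-point colors would be splatted by the two parallel rasterization passes and supervised against HDR and LDR views, respectively, as described in the abstract.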