GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis
December 4, 2023
Authors: Shunyuan Zheng, Boyao Zhou, Ruizhi Shao, Boning Liu, Shengping Zhang, Liqiang Nie, Yebin Liu
cs.AI
Abstract
We present a new approach, termed GPS-Gaussian, for synthesizing novel views
of a character in real time. The proposed method enables 2K-resolution
rendering under a sparse-view camera setting. Unlike the original Gaussian
Splatting or neural implicit rendering methods, which require per-subject
optimization, we introduce Gaussian parameter maps defined on the source views
and directly regress Gaussian Splatting properties for instant novel view
synthesis without any fine-tuning or optimization. To this end, we train our
Gaussian parameter regression module on a large amount of human scan data,
jointly with a depth estimation module that lifts the 2D parameter maps into
3D space. The proposed framework is fully differentiable, and experiments on
several datasets demonstrate that our method outperforms state-of-the-art
methods while achieving a significantly faster rendering speed.
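The core geometric step the abstract describes, lifting per-pixel 2D parameter maps into 3D via an estimated depth map, amounts to unprojecting each source-view pixel into a 3D Gaussian center. Below is a minimal sketch of that unprojection under a standard pinhole camera model; the function name `lift_to_3d` and all values are illustrative assumptions, not from the paper's code.

```python
import numpy as np

def lift_to_3d(depth, K):
    """Unproject a per-pixel depth map to 3D points (candidate Gaussian
    centers) with a pinhole model. `depth` is (H, W); `K` is the 3x3
    camera intrinsics. Returns (H, W, 3) points in camera coordinates."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))      # pixel coordinate grid
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)    # homogeneous pixels
    rays = pix @ np.linalg.inv(K).T                     # back-projected rays, z = 1
    return rays * depth[..., None]                      # scale each ray by its depth

# Toy example: a 64x64 view at constant depth 2.0 (hypothetical intrinsics).
K = np.array([[100.0,   0.0, 32.0],
              [  0.0, 100.0, 32.0],
              [  0.0,   0.0,  1.0]])
pts = lift_to_3d(np.full((64, 64), 2.0), K)
```

In the actual pipeline, the other regressed per-pixel properties (color, opacity, covariance) would be attached to these centers to form renderable Gaussians, which is what makes the whole framework differentiable end to end.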