Human101: Training 100+FPS Human Gaussians in 100s from 1 View
December 23, 2023
Authors: Mingwei Li, Jiachen Tao, Zongxin Yang, Yi Yang
cs.AI
Abstract
Reconstructing the human body from single-view videos plays a pivotal role in
the virtual reality domain. One prevalent application scenario necessitates the
rapid reconstruction of high-fidelity 3D digital humans while simultaneously
ensuring real-time rendering and interaction. Existing methods often struggle
to fulfill both requirements. In this paper, we introduce Human101, a novel
framework adept at producing high-fidelity dynamic 3D human reconstructions
from 1-view videos by training 3D Gaussians in 100 seconds and rendering in
100+ FPS. Our method leverages the strengths of 3D Gaussian Splatting, which
provides an explicit and efficient representation of 3D humans. Standing apart
from prior NeRF-based pipelines, Human101 ingeniously applies a Human-centric
Forward Gaussian Animation method to deform the parameters of 3D Gaussians,
thereby enhancing rendering speed (i.e., rendering 1024-resolution images at an
impressive 60+ FPS and rendering 512-resolution images at 100+ FPS).
Experimental results indicate that our approach substantially outperforms current
methods, achieving up to a 10x increase in frames per second while delivering
comparable or superior rendering quality. Code and demos will be released at
https://github.com/longxiang-ai/Human101.
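The abstract describes deforming the parameters of 3D Gaussians with a forward animation step rather than the inverse warping used in NeRF-based pipelines. Since the Human101 code is not yet released, the sketch below is only an illustration of the general idea under one common assumption: that each Gaussian's center and covariance are posed by linear blend skinning of rigid bone transforms. All names (`deform_gaussians`, `skin_weights`, `bone_transforms`) are hypothetical, not the paper's API.

```python
import numpy as np

def deform_gaussians(means, covariances, skin_weights, bone_transforms):
    """Forward-deform 3D Gaussian parameters via linear blend skinning.

    Hypothetical sketch (not Human101's actual implementation):
      means:           (N, 3)    Gaussian centers in the canonical pose
      covariances:     (N, 3, 3) canonical covariance matrices
      skin_weights:    (N, B)    per-Gaussian bone weights (rows sum to 1)
      bone_transforms: (B, 4, 4) rigid bone transforms for the target pose
    """
    # Blend the per-bone rigid transforms for each Gaussian (LBS).
    blended = np.einsum('nb,bij->nij', skin_weights, bone_transforms)  # (N, 4, 4)
    R = blended[:, :3, :3]   # rotation part
    t = blended[:, :3, 3]    # translation part
    # Carry the centers forward into the posed space: mu' = R mu + t.
    posed_means = np.einsum('nij,nj->ni', R, means) + t
    # Rotate each covariance: Sigma' = R Sigma R^T.
    posed_covs = np.einsum('nij,njk,nlk->nil', R, covariances, R)
    return posed_means, posed_covs
```

Because the deformation is applied directly to explicit Gaussian parameters (a single batched transform per frame), rendering avoids the per-ray network queries of NeRF-style pipelines, which is consistent with the 100+ FPS rates reported above.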