Human101: Training 100+FPS Human Gaussians in 100s from 1 View
December 23, 2023
Authors: Mingwei Li, Jiachen Tao, Zongxin Yang, Yi Yang
cs.AI
Abstract
Reconstructing the human body from single-view videos plays a pivotal role in
the virtual reality domain. One prevalent application scenario necessitates the
rapid reconstruction of high-fidelity 3D digital humans while simultaneously
ensuring real-time rendering and interaction. Existing methods often struggle
to fulfill both requirements. In this paper, we introduce Human101, a novel
framework adept at producing high-fidelity dynamic 3D human reconstructions
from 1-view videos by training 3D Gaussians in 100 seconds and rendering at
100+ FPS. Our method leverages the strengths of 3D Gaussian Splatting, which
provides an explicit and efficient representation of 3D humans. In contrast to
prior NeRF-based pipelines, Human101 applies a Human-centric Forward Gaussian
Animation method to deform the parameters of 3D Gaussians, thereby enhancing
rendering speed (i.e., rendering 1024-resolution images at 60+ FPS and
512-resolution images at 100+ FPS). Experimental results indicate that our
approach substantially outperforms current methods, achieving up to a 10x
increase in frames per second while delivering comparable or superior rendering
quality. Code and demos will be released at
https://github.com/longxiang-ai/Human101.
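
To make the forward-animation idea concrete, below is a minimal, hypothetical sketch (Python/NumPy, not taken from the paper or its released code) of one common way such a step can be realized: deforming canonical 3D Gaussian parameters into a posed space via skeleton-driven linear blend skinning. All function names, parameter names, and shapes here are assumptions for illustration only.

```python
# Hypothetical sketch: forward-deforming explicit 3D Gaussian parameters with
# linear blend skinning (LBS). Not the authors' implementation; names and
# shapes are assumed for illustration.
import numpy as np

def forward_animate_gaussians(means, rots, skin_weights, bone_transforms):
    """Deform canonical Gaussians into the posed (observation) space.

    means:           (N, 3)    canonical Gaussian centers
    rots:            (N, 3, 3) canonical Gaussian rotation matrices
    skin_weights:    (N, B)    per-Gaussian skinning weights (rows sum to 1)
    bone_transforms: (B, 4, 4) rigid bone transforms for the target pose
    """
    # Blend the bone transforms per Gaussian (standard LBS).
    blended = np.einsum('nb,bij->nij', skin_weights, bone_transforms)  # (N, 4, 4)

    # Transform each center by its blended rigid transform.
    homog = np.concatenate([means, np.ones((len(means), 1))], axis=1)  # (N, 4)
    posed_means = np.einsum('nij,nj->ni', blended, homog)[:, :3]

    # Rotate each Gaussian's orientation so its anisotropic covariance
    # follows the body motion. Note: LBS-blended rotations are not exactly
    # orthonormal; a real implementation might re-orthonormalize (e.g., via
    # SVD) or blend quaternions instead.
    posed_rots = np.einsum('nij,njk->nik', blended[:, :3, :3], rots)
    return posed_means, posed_rots
```

A step of this kind only transforms explicit per-Gaussian parameters, with no per-ray network queries of the sort NeRF-based deformation pipelines require, which is consistent with the abstract's explanation of how forward Gaussian animation enables the reported rendering speeds.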