

Human101: Training 100+FPS Human Gaussians in 100s from 1 View

December 23, 2023
Authors: Mingwei Li, Jiachen Tao, Zongxin Yang, Yi Yang
cs.AI

Abstract

Reconstructing the human body from single-view videos plays a pivotal role in the virtual reality domain. One prevalent application scenario necessitates the rapid reconstruction of high-fidelity 3D digital humans while simultaneously ensuring real-time rendering and interaction. Existing methods often struggle to fulfill both requirements. In this paper, we introduce Human101, a novel framework adept at producing high-fidelity dynamic 3D human reconstructions from 1-view videos by training 3D Gaussians in 100 seconds and rendering in 100+ FPS. Our method leverages the strengths of 3D Gaussian Splatting, which provides an explicit and efficient representation of 3D humans. Standing apart from prior NeRF-based pipelines, Human101 ingeniously applies a Human-centric Forward Gaussian Animation method to deform the parameters of 3D Gaussians, thereby enhancing rendering speed (i.e., rendering 1024-resolution images at an impressive 60+ FPS and rendering 512-resolution images at 100+ FPS). Experimental results indicate that our approach substantially eclipses current methods, clocking up to a 10 times surge in frames per second and delivering comparable or superior rendering quality. Code and demos will be released at https://github.com/longxiang-ai/Human101.
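The abstract names a Human-centric Forward Gaussian Animation step that deforms the parameters of explicit 3D Gaussians, but gives no implementation details. As an illustration only, the sketch below shows how forward deformation of Gaussian centers and orientations via standard linear blend skinning might look; every function name, array shape, and detail is an assumption made for this example, not the paper's released code.

```python
# Illustrative sketch only: forward deformation of 3D Gaussian parameters
# via linear blend skinning (LBS). All names, shapes, and details are
# assumptions for illustration, not the Human101 implementation.
import numpy as np

def forward_deform_gaussians(means, rotations, skin_weights, bone_transforms):
    """Deform canonical 3D Gaussians into the posed space.

    means:           (N, 3)    canonical Gaussian centers
    rotations:       (N, 3, 3) canonical Gaussian orientation matrices
    skin_weights:    (N, J)    per-Gaussian skinning weights over J joints
    bone_transforms: (J, 4, 4) rigid joint transforms for the target pose
    """
    # Blend the per-joint rigid transforms for each Gaussian (LBS).
    blended = np.einsum('nj,jab->nab', skin_weights, bone_transforms)   # (N, 4, 4)

    # Apply the blended transform to each Gaussian center.
    homog = np.concatenate([means, np.ones((means.shape[0], 1))], axis=1)  # (N, 4)
    posed_means = np.einsum('nab,nb->na', blended, homog)[:, :3]

    # Rotate the Gaussian orientations by the (approximately rigid)
    # rotational part of the blended transform.
    posed_rotations = np.einsum('nab,nbc->nac', blended[:, :3, :3], rotations)

    return posed_means, posed_rotations
```

Because the deformed Gaussians remain an explicit representation, they can be handed directly to a standard Gaussian splatting rasterizer, which is consistent with the real-time rendering speeds the abstract reports.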