Relightable and Animatable Neural Avatar from Sparse-View Video
August 15, 2023
Authors: Zhen Xu, Sida Peng, Chen Geng, Linzhan Mou, Zihan Yan, Jiaming Sun, Hujun Bao, Xiaowei Zhou
cs.AI
Abstract
This paper tackles the challenge of creating relightable and animatable neural avatars from sparse-view (or even monocular) videos of dynamic humans under unknown illumination. Compared to studio environments, this setting is more practical and accessible but poses an extremely challenging ill-posed problem. Previous neural human reconstruction methods can reconstruct animatable avatars from sparse views using deformed Signed Distance Fields (SDFs) but cannot recover material parameters for relighting. While differentiable inverse-rendering methods have succeeded in recovering the materials of static objects, extending them to dynamic humans is not straightforward, because computing pixel-surface intersections and light visibility on deformed SDFs is computationally intensive for inverse rendering. To solve this challenge, we propose a Hierarchical Distance Query (HDQ) algorithm that approximates world-space distances under arbitrary human poses. Specifically, we estimate coarse distances based on a parametric human model and compute fine distances by exploiting the local deformation invariance of the SDF.
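To make the two-level query concrete, here is a minimal sketch in Python. It assumes hypothetical helpers `posed_smpl_distance` (distance to the posed parametric body surface), `warp_to_canonical` (inverse skinning of a world point), and `canonical_sdf` (the learned canonical-space SDF); the margin and switching threshold are illustrative values, not the paper's exact settings.

```python
def hdq_distance(x_world, posed_smpl_distance, warp_to_canonical,
                 canonical_sdf, margin=0.05, tau=0.1):
    """Hierarchical Distance Query (sketch): approximate the
    world-space distance to the deformed human surface at x_world.

    The three callables stand in for the parametric body model,
    the inverse-skinning warp, and the learned canonical SDF.
    """
    # Coarse level: distance to the posed parametric (SMPL-like)
    # surface, shrunk by a margin so it lower-bounds the true distance.
    d_coarse = posed_smpl_distance(x_world) - margin
    if d_coarse > tau:
        # Far from the body: the coarse bound is a safe step size.
        return d_coarse
    # Fine level: near the surface, the deformation is locally
    # near-rigid, so the canonical SDF value at the warped point
    # approximates the world-space distance.
    return canonical_sdf(warp_to_canonical(x_world))
```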
Based on the HDQ algorithm, we leverage sphere tracing to efficiently estimate surface intersections and light visibility. This allows us to develop the first system that recovers animatable and relightable neural avatars from sparse-view (or monocular) inputs. Experiments demonstrate that our approach produces superior results compared with state-of-the-art methods. Our code will be released for reproducibility.
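To show how such a distance query drives rendering, the following sketch pairs a basic sphere tracer for surface intersection with a shadow-ray visibility test. `distance_fn` is any conservative distance bound, such as the HDQ sketch above; the step limits and epsilons are illustrative, and this is an assumed minimal implementation rather than the authors' released code.

```python
import numpy as np

def sphere_trace(origin, direction, distance_fn,
                 eps=1e-4, t_max=3.0, max_steps=64):
    """March a ray by the queried distance bound until it
    converges onto the surface; returns (point, hit flag)."""
    t = 0.0
    for _ in range(max_steps):
        x = origin + t * direction
        d = distance_fn(x)
        if d < eps:
            return x, True      # close enough: surface intersection
        t += d                  # safe step: d lower-bounds the distance
        if t > t_max:
            break
    return origin + t * direction, False

def light_visibility(surface_point, light_dir, distance_fn):
    """Binary visibility: trace a shadow ray toward the light and
    report whether it escapes without re-hitting the surface."""
    # Offset the start point to avoid immediate self-intersection.
    start = surface_point + 1e-3 * light_dir
    _, hit = sphere_trace(start, light_dir, distance_fn)
    return 0.0 if hit else 1.0
```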