Relightable and Animatable Neural Avatar from Sparse-View Video
August 15, 2023
作者: Zhen Xu, Sida Peng, Chen Geng, Linzhan Mou, Zihan Yan, Jiaming Sun, Hujun Bao, Xiaowei Zhou
cs.AI
Abstract
This paper tackles the challenge of creating relightable and animatable
neural avatars from sparse-view (or even monocular) videos of dynamic humans
under unknown illumination. Compared to studio environments, this setting is
more practical and accessible but poses an extremely challenging ill-posed
problem. Previous neural human reconstruction methods are able to reconstruct
animatable avatars from sparse views using deformed Signed Distance Fields
(SDF) but cannot recover material parameters for relighting. While
differentiable inverse rendering-based methods have succeeded in material
recovery of static objects, it is not straightforward to extend them to dynamic
humans as it is computationally intensive to compute pixel-surface intersection
and light visibility on deformed SDFs for inverse rendering. To solve this
challenge, we propose a Hierarchical Distance Query (HDQ) algorithm to
approximate the world space distances under arbitrary human poses.
Specifically, we estimate coarse distances based on a parametric human model
and compute fine distances by exploiting the local deformation invariance of
SDF. Based on the HDQ algorithm, we leverage sphere tracing to efficiently
estimate the surface intersection and light visibility. This allows us to
develop the first system to recover animatable and relightable neural avatars
from sparse-view (or monocular) inputs. Experiments demonstrate that our
approach is able to produce superior results compared to state-of-the-art
methods. Our code will be released for reproducibility.
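To make the core idea concrete, below is a minimal, self-contained sketch of hierarchical distance querying combined with sphere tracing. It is an illustration under simplifying assumptions, not the paper's implementation: the "canonical SDF" is a toy analytic sphere rather than a learned neural SDF, the deformation is a rigid translation standing in for an SMPL-driven non-rigid warp, and the coarse distance is a bounding-sphere bound standing in for the parametric-body-model estimate. The threshold `tau` and the safety `margin` are hypothetical parameters.

```python
import numpy as np

# Toy canonical-space SDF: a unit sphere at the origin (stand-in for the
# learned neural SDF of the avatar in its canonical pose).
def canonical_sdf(x):
    return np.linalg.norm(x) - 1.0

# Toy deformation: the posed avatar is the canonical shape translated by T.
# In the paper's setting this would be a skeleton-driven non-rigid warp;
# a translation is used here purely for illustration.
T = np.array([0.0, 0.0, 3.0])

def inverse_warp(x_world):
    # Map a world-space point back to canonical space (trivial here).
    return x_world - T

def coarse_distance(x_world, margin=0.2):
    # Coarse bound from the parametric body model: here, distance to a
    # bounding sphere around the posed body minus a safety margin (assumed).
    return max(np.linalg.norm(x_world - T) - (1.0 + margin), 0.0)

def hierarchical_distance(x_world, tau=0.5):
    # Far from the body: the cheap coarse distance is a safe step size.
    d_coarse = coarse_distance(x_world)
    if d_coarse > tau:
        return d_coarse
    # Near the body: local deformation invariance lets us approximate the
    # world-space distance by the canonical SDF at the inverse-warped point.
    return canonical_sdf(inverse_warp(x_world))

def sphere_trace(origin, direction, max_steps=128, eps=1e-4):
    # March along the ray, stepping by the queried distance each time.
    t = 0.0
    for _ in range(max_steps):
        d = hierarchical_distance(origin + t * direction)
        if d < eps:
            return t  # converged onto the surface
        t += d
    return None  # no intersection within the step budget

# Ray from the origin straight toward the posed sphere (surface at z = 2).
t_hit = sphere_trace(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))
```

The same `sphere_trace` routine, pointed from a surface sample toward a light source, yields the light-visibility test: if tracing terminates before reaching the light, the point is shadowed.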