AvatarReX: Real-time Expressive Full-body Avatars
May 8, 2023
Authors: Zerong Zheng, Xiaochen Zhao, Hongwen Zhang, Boning Liu, Yebin Liu
cs.AI
Abstract
We present AvatarReX, a new method for learning NeRF-based full-body avatars
from video data. The learnt avatar not only provides expressive control of the
body, hands and the face together, but also supports real-time animation and
rendering. To this end, we propose a compositional avatar representation, where
the body, hands and the face are separately modeled in a way that the
structural prior from parametric mesh templates is properly utilized without
compromising representation flexibility. Furthermore, we disentangle the
geometry and appearance for each part. With these technical designs, we propose
a dedicated deferred rendering pipeline, which runs at real-time
frame rates to synthesize high-quality free-view images. The disentanglement of
geometry and appearance also allows us to design a two-pass training strategy
that combines volume rendering and surface rendering for network training. In
this way, patch-level supervision can be applied to force the network to learn
sharp appearance details on the basis of geometry estimation. Overall, our
method enables automatic construction of expressive full-body avatars with
real-time rendering capability, and can generate photo-realistic images with
dynamic details for novel body motions and facial expressions.
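The compositional representation described above can be illustrated with a minimal sketch: each part (body, hands, face) keeps its own geometry model and its own appearance model, and the full avatar answers a spatial query by delegating to whichever part's surface is nearest. Everything here is illustrative, assuming toy sphere geometry and constant colours; the class names, fields, and the union-of-signed-distances composition rule are my assumptions, not the paper's actual networks.

```python
import math

class PartModel:
    """One avatar part with disentangled geometry and appearance (toy stand-ins)."""
    def __init__(self, center, radius, color):
        self.center = center  # sphere centre (x, y, z) -- stand-in geometry parameter
        self.radius = radius  # sphere radius -- stand-in geometry parameter
        self.color = color    # stand-in appearance output

    def sdf(self, p):
        """Signed distance from point p to this part's surface (negative = inside)."""
        return math.dist(p, self.center) - self.radius

    def appearance(self, p):
        """Appearance query, deliberately independent of the geometry query."""
        return self.color

class CompositionalAvatar:
    """Body, hands and face modelled separately, composed at query time."""
    def __init__(self, parts):
        self.parts = parts  # dict: part name -> PartModel

    def query(self, p):
        # Compose parts as a union of signed distance fields: the part whose
        # surface is nearest to p answers both geometry and appearance.
        part = min(self.parts.values(), key=lambda m: m.sdf(p))
        return part.sdf(p), part.appearance(p)

avatar = CompositionalAvatar({
    "body": PartModel((0.0, 0.0, 0.0), 0.5, "body_color"),
    "face": PartModel((0.0, 0.8, 0.0), 0.2, "face_color"),
    "hand": PartModel((0.7, 0.0, 0.0), 0.1, "hand_color"),
})

sd, col = avatar.query((0.0, 0.85, 0.0))  # a point near the face part
print(sd, col)
```

Because geometry and appearance are queried through separate functions per part, a training scheme could supervise them differently (e.g. surface-based losses on `sdf`, patch-level image losses on `appearance`), loosely mirroring the two-pass strategy the abstract describes.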