SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes
August 16, 2023
Authors: Edith Tretschk, Vladislav Golyanik, Michael Zollhoefer, Aljaz Bozic, Christoph Lassner, Christian Theobalt
cs.AI
Abstract
Existing methods for the 4D reconstruction of general, non-rigidly deforming
objects focus on novel-view synthesis and neglect correspondences. However,
time consistency enables advanced downstream tasks like 3D editing, motion
analysis, or virtual-asset creation. We propose SceNeRFlow to reconstruct a
general, non-rigid scene in a time-consistent manner. Our dynamic-NeRF method
takes multi-view RGB videos and background images from static cameras with
known camera parameters as input. It then reconstructs the deformations of an
estimated canonical model of the geometry and appearance in an online fashion.
Since this canonical model is time-invariant, we obtain correspondences even
for long-term, long-range motions. We employ neural scene representations to
parametrize the components of our method. Like prior dynamic-NeRF methods, we
use a backwards deformation model. We find non-trivial adaptations of this
model necessary to handle larger motions: We decompose the deformations into a
strongly regularized coarse component and a weakly regularized fine component,
where the coarse component also extends the deformation field into the space
surrounding the object, which enables tracking over time. We show
experimentally that, unlike prior work that only handles small motion, our
method enables the reconstruction of studio-scale motions.
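To make the coarse/fine split described above concrete, below is a minimal PyTorch sketch of a backward deformation model with a strongly regularized coarse component and a weakly regularized fine component. This is not the authors' implementation: the module names, network sizes, smoothness term, and loss weights are all illustrative assumptions standing in for the actual SceNeRFlow architecture and regularizers.

```python
# Minimal sketch (assumptions, not the authors' code) of a backward deformation
# model that maps points observed at time t back towards a canonical space,
# split into a strongly regularized coarse warp and a weakly regularized fine warp.
import torch
import torch.nn as nn


def mlp(in_dim, out_dim, width=128, depth=4):
    """Small fully connected network; size is an arbitrary placeholder."""
    layers, dim = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(dim, width), nn.ReLU()]
        dim = width
    layers.append(nn.Linear(dim, out_dim))
    return nn.Sequential(*layers)


class BackwardDeformation(nn.Module):
    """Warps a point observed at time t back into the canonical model's space."""

    def __init__(self):
        super().__init__()
        self.coarse = mlp(4, 3)  # (x, y, z, t) -> large-scale offset
        self.fine = mlp(4, 3)    # residual offset on the coarsely warped point

    def forward(self, x, t):
        coarse_offset = self.coarse(torch.cat([x, t], dim=-1))
        x_coarse = x + coarse_offset
        fine_offset = self.fine(torch.cat([x_coarse, t], dim=-1))
        return x_coarse + fine_offset


def smoothness(offset_net, x, t, eps=1e-2):
    """Crude finite-difference smoothness term: nearby points should deform alike.
    Stands in for the paper's actual regularizers."""
    x_jit = x + eps * torch.randn_like(x)
    d = offset_net(torch.cat([x, t], dim=-1)) - offset_net(torch.cat([x_jit, t], dim=-1))
    return d.norm(dim=-1).mean()


if __name__ == "__main__":
    model = BackwardDeformation()
    x = torch.rand(1024, 3)          # points sampled along camera rays at time t
    t = torch.full((1024, 1), 0.5)   # normalized time stamp

    x_canonical = model(x, t)

    # Strong regularization on the coarse warp, weak on the fine warp;
    # the weights 1.0 and 0.01 are made-up placeholders.
    reg = 1.0 * smoothness(model.coarse, x, t) + 0.01 * smoothness(model.fine, x, t)
    print(x_canonical.shape, reg.item())
```

In this sketch the fine warp is composed after the coarse one, so it only has to model residual detail on top of the heavily constrained large-scale motion; sampling the coarse network at points around the object, not just on its surface, is one way such a deformation field could be extended into the surrounding space as the abstract describes.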