SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes

August 16, 2023
Authors: Edith Tretschk, Vladislav Golyanik, Michael Zollhoefer, Aljaz Bozic, Christoph Lassner, Christian Theobalt
cs.AI

Abstract

Existing methods for the 4D reconstruction of general, non-rigidly deforming objects focus on novel-view synthesis and neglect correspondences. However, time consistency enables advanced downstream tasks like 3D editing, motion analysis, or virtual-asset creation. We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner. Our dynamic-NeRF method takes multi-view RGB videos and background images from static cameras with known camera parameters as input. It then reconstructs the deformations of an estimated canonical model of the geometry and appearance in an online fashion. Since this canonical model is time-invariant, we obtain correspondences even for long-term, long-range motions. We employ neural scene representations to parametrize the components of our method. Like prior dynamic-NeRF methods, we use a backwards deformation model. We find non-trivial adaptations of this model necessary to handle larger motions: We decompose the deformations into a strongly regularized coarse component and a weakly regularized fine component, where the coarse component also extends the deformation field into the space surrounding the object, which enables tracking over time. We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
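
The abstract describes the core architecture: a time-invariant canonical NeRF queried through a backwards deformation that is split into a strongly regularized coarse component and a weakly regularized fine component. The sketch below is a minimal illustration of that decomposition, not the authors' implementation; the module names, signatures, and loss weights are hypothetical and only indicate how query points at time t could be warped into canonical space before sampling the canonical model.

```python
# Minimal sketch (hypothetical, not the authors' code) of a backwards
# deformation model with a coarse/fine split in front of a canonical NeRF.
import torch
import torch.nn as nn


class BackwardsDeformationNeRF(nn.Module):
    def __init__(self, canonical_nerf: nn.Module,
                 coarse_deform: nn.Module, fine_deform: nn.Module):
        super().__init__()
        self.canonical_nerf = canonical_nerf  # time-invariant geometry and appearance
        self.coarse_deform = coarse_deform    # low-frequency field, strongly regularized
        self.fine_deform = fine_deform        # high-frequency residual, weakly regularized

    def forward(self, x_t: torch.Tensor, t: torch.Tensor):
        # The coarse warp also extends into the space around the object,
        # which is what supports tracking over large, long-range motions.
        x_coarse = x_t + self.coarse_deform(x_t, t)
        # The fine warp adds small, weakly regularized corrections.
        x_canonical = x_coarse + self.fine_deform(x_coarse, t)
        # Query the canonical model at the warped (canonical-space) points.
        density, color = self.canonical_nerf(x_canonical)
        return density, color


def deformation_regularizer(coarse_offsets: torch.Tensor,
                            fine_offsets: torch.Tensor,
                            w_coarse: float = 1.0,
                            w_fine: float = 0.01) -> torch.Tensor:
    # Illustrative penalty showing the asymmetry: the coarse component is
    # penalized strongly, the fine component only weakly.
    return (w_coarse * coarse_offsets.pow(2).mean()
            + w_fine * fine_offsets.pow(2).mean())
```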
