Neural Scene Chronology
June 13, 2023
Authors: Haotong Lin, Qianqian Wang, Ruojin Cai, Sida Peng, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely
cs.AI
Abstract
In this work, we aim to reconstruct a time-varying 3D model, capable of
producing photo-realistic renderings with independent control of viewpoint,
illumination, and time, from Internet photos of large-scale landmarks. The core
challenges are twofold. First, different types of temporal changes, such as
illumination and changes to the underlying scene itself (e.g., replacing one
graffiti artwork with another), are entangled in the imagery. Second,
scene-level temporal changes are often discrete and sporadic over time, rather
than continuous. To tackle these problems, we propose a new scene
representation equipped with a novel temporal step function encoding method
that can model discrete scene-level content changes as piece-wise constant
functions over time. Specifically, we represent the scene as a space-time
radiance field with a per-image illumination embedding, where
temporally-varying scene changes are encoded using a set of learned step
functions. To facilitate our task of chronology reconstruction from Internet
imagery, we also collect a new dataset of four scenes that exhibit various
changes over time. We demonstrate that our method exhibits state-of-the-art
view synthesis results on this dataset, while achieving independent control of
viewpoint, time, and illumination.
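To make the step-function encoding concrete, the sketch below shows one way a timestamp could be mapped to piecewise-(near-)constant features: each learned transition time contributes a steep sigmoid that switches from ~0 to ~1 as time crosses it. This is a minimal illustration under assumed names and values (`step_function_encoding`, `sharpness`, the transition times), not the paper's actual implementation.

```python
import numpy as np

def step_function_encoding(t, transitions, sharpness=100.0):
    """Encode timestamps as piecewise-(near-)constant features.

    Each transition time tau_k yields one feature that flips from ~0
    to ~1 as t crosses tau_k. A steep sigmoid approximates a hard
    step while remaining differentiable, so the transition times
    could in principle be learned by gradient descent.
    (Illustrative sketch; not the authors' implementation.)
    """
    t = np.asarray(t, dtype=np.float64)
    taus = np.asarray(transitions, dtype=np.float64)
    # Broadcast to shape (..., K): one sigmoid per transition time.
    return 1.0 / (1.0 + np.exp(-sharpness * (t[..., None] - taus)))

# Example: two assumed transitions (e.g., graffiti replaced twice).
transitions = [0.3, 0.7]
codes = step_function_encoding(np.array([0.1, 0.5, 0.9]), transitions)
# Timestamps within the same interval receive nearly identical codes,
# so scene content conditioned on them is piecewise constant in time,
# while a separate per-image embedding can absorb illumination changes.
```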