

Neural Scene Chronology

June 13, 2023
Authors: Haotong Lin, Qianqian Wang, Ruojin Cai, Sida Peng, Hadar Averbuch-Elor, Xiaowei Zhou, Noah Snavely
cs.AI

Abstract

In this work, we aim to reconstruct a time-varying 3D model, capable of producing photo-realistic renderings with independent control of viewpoint, illumination, and time, from Internet photos of large-scale landmarks. The core challenges are twofold. First, different types of temporal changes, such as illumination and changes to the underlying scene itself (such as replacing one graffiti artwork with another), are entangled together in the imagery. Second, scene-level temporal changes are often discrete and sporadic over time, rather than continuous. To tackle these problems, we propose a new scene representation equipped with a novel temporal step function encoding method that can model discrete scene-level content changes as piece-wise constant functions over time. Specifically, we represent the scene as a space-time radiance field with a per-image illumination embedding, where temporally-varying scene changes are encoded using a set of learned step functions. To facilitate our task of chronology reconstruction from Internet imagery, we also collect a new dataset of four scenes that exhibit various changes over time. We demonstrate that our method exhibits state-of-the-art view synthesis results on this dataset, while achieving independent control of viewpoint, time, and illumination.
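The step function encoding described above can be illustrated with a minimal sketch. The abstract does not give implementation details, so the function below is a hypothetical interpretation: each feature dimension is a soft (sigmoid-relaxed) step with its own transition time, so the encoding of a timestamp `t` is approximately piecewise constant, jumping only when `t` crosses a learned transition. The names `step_function_encoding`, `transitions`, and `beta` are illustrative, not from the paper.

```python
import numpy as np

def step_function_encoding(t, transitions, beta=100.0):
    """Soft step-function encoding of normalized times t in [0, 1].

    t           : array of shape (batch,) -- timestamps to encode
    transitions : array of shape (num_steps,) -- transition times
                  (learnable parameters in a full model)
    beta        : sharpness; as beta grows, each sigmoid approaches
                  a hard step, making the encoding piecewise constant

    Returns an array of shape (batch, num_steps) where entry [i, j]
    is ~0 before transitions[j] and ~1 after it.
    """
    t = np.asarray(t, dtype=float)
    transitions = np.asarray(transitions, dtype=float)
    # Broadcast (batch, 1) against (1, num_steps)
    return 1.0 / (1.0 + np.exp(-beta * (t[:, None] - transitions[None, :])))

# Example: 8 transitions spread over the capture period
transitions = np.linspace(0.1, 0.9, 8)
codes = step_function_encoding(np.array([0.0, 0.5, 1.0]), transitions)
```

In a full model, such a time code would be concatenated with spatial position features and fed to the radiance-field MLP, while the per-image illumination embedding separately conditions the color prediction, keeping scene-content changes and lighting changes disentangled.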