
UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video

June 15, 2023
Authors: Zhi-Hao Lin, Bohan Liu, Yi-Ting Chen, David Forsyth, Jia-Bin Huang, Anand Bhattad, Shenlong Wang
cs.AI

Abstract

We show how to build a model that allows realistic, free-viewpoint renderings of a scene under novel lighting conditions from video. Our method -- UrbanIR: Urban Scene Inverse Rendering -- computes an inverse graphics representation from the video. UrbanIR jointly infers shape, albedo, visibility, and sun and sky illumination from a single video of unbounded outdoor scenes with unknown lighting. UrbanIR uses videos from cameras mounted on cars (in contrast to many views of the same points in typical NeRF-style estimation). As a result, standard methods produce poor geometry estimates (for example, roofs), and there are numerous "floaters". Errors in inverse graphics inference can result in strong rendering artifacts. UrbanIR uses novel losses to control these and other sources of error. UrbanIR uses a novel loss to make very good estimates of shadow volumes in the original scene. The resulting representations facilitate controllable editing, delivering photorealistic free-viewpoint renderings of relit scenes and inserted objects. Qualitative evaluation demonstrates strong improvements over the state-of-the-art.
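The abstract does not spell out the shading model, but a minimal sketch of the kind of decomposition it describes (per-point albedo, geometry, and a sun-visibility/shadow term combined with sun and sky illumination to produce a relightable render) might look like the following. The function name, parameters, and the simple Lambertian sun-plus-sky formulation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def shade_outdoor(albedo, normal, sun_dir, sun_visibility,
                  sun_radiance, sky_irradiance):
    """Hypothetical Lambertian outdoor shading: direct sunlight gated by a
    shadow/visibility term, plus an ambient sky term, modulating albedo.

    albedo:         (..., 3) per-point base color
    normal:         (..., 3) unit surface normals
    sun_dir:        (3,)     unit direction toward the sun
    sun_visibility: (..., 1) in [0, 1]; 0 = fully shadowed
    sun_radiance:   (3,)     RGB intensity of the sun
    sky_irradiance: (3,)     RGB ambient sky contribution
    """
    # Clamped cosine between surface normal and sun direction.
    cos_term = np.clip(np.sum(normal * sun_dir, axis=-1, keepdims=True), 0.0, None)
    direct = sun_visibility * cos_term * sun_radiance
    return albedo * (direct + sky_irradiance)

# Relighting in this sketch amounts to swapping sun_dir, sun_radiance, and
# sky_irradiance while keeping the inferred albedo, normals, and visibility fixed.
```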