UrbanIR: Large-Scale Urban Scene Inverse Rendering from a Single Video

June 15, 2023
作者: Zhi-Hao Lin, Bohan Liu, Yi-Ting Chen, David Forsyth, Jia-Bin Huang, Anand Bhattad, Shenlong Wang
cs.AI

Abstract

We show how to build a model that allows realistic, free-viewpoint renderings of a scene under novel lighting conditions from video. Our method -- UrbanIR: Urban Scene Inverse Rendering -- computes an inverse graphics representation from the video. UrbanIR jointly infers shape, albedo, visibility, and sun and sky illumination from a single video of an unbounded outdoor scene with unknown lighting. UrbanIR uses videos from cameras mounted on cars (in contrast to the many views of the same points available in typical NeRF-style estimation). As a result, standard methods produce poor geometry estimates (for example, of roofs) and numerous "floaters", and such errors in inverse graphics inference can cause strong rendering artifacts. UrbanIR uses novel losses to control these and other sources of error, including a loss that yields very good estimates of shadow volumes in the original scene. The resulting representations facilitate controllable editing, delivering photorealistic free-viewpoint renderings of relit scenes and inserted objects. Qualitative evaluation demonstrates strong improvements over the state of the art.
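To make the decomposition in the abstract concrete, here is a minimal shading sketch showing how recovered albedo, surface normals, and per-point sun visibility could be recombined under new lighting. This is not the paper's implementation: the function name, tensor layout, and the Lambertian sun-plus-ambient-sky model are assumptions for illustration only.

```python
import numpy as np

def shade_outdoor(albedo, normals, sun_dir, sun_radiance, sky_irradiance, sun_visibility):
    """Recombine an inverse-rendering-style decomposition into per-point color.

    albedo:         (N, 3) diffuse reflectance per surface point
    normals:        (N, 3) unit surface normals
    sun_dir:        (3,)   unit vector pointing toward the sun
    sun_radiance:   (3,)   RGB radiance of the sun
    sky_irradiance: (3,)   RGB ambient sky term
    sun_visibility: (N,)   in [0, 1]; 0 means fully shadowed from the sun
    """
    # Lambertian sun term, gated by the shadow/visibility estimate.
    cos_term = np.clip(normals @ sun_dir, 0.0, None)                      # (N,)
    direct = sun_visibility[:, None] * cos_term[:, None] * sun_radiance   # (N, 3)
    # Simple non-directional ambient sky term.
    ambient = sky_irradiance[None, :]                                     # (1, 3)
    return albedo * (direct + ambient)                                    # (N, 3)
```

Under this kind of model, relighting amounts to swapping the sun direction, sun radiance, and sky term while keeping the recovered albedo, geometry, and visibility fixed.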