
Fast View Synthesis of Casual Videos

December 4, 2023
Authors: Yao-Chih Lee, Zhoutong Zhang, Kevin Blackburn-Matzen, Simon Niklaus, Jianming Zhang, Jia-Bin Huang, Feng Liu
cs.AI

Abstract

Novel view synthesis from an in-the-wild video is difficult due to challenges like scene dynamics and lack of parallax. While existing methods have shown promising results with implicit neural radiance fields, they are slow to train and render. This paper revisits explicit video representations to synthesize high-quality novel views from a monocular video efficiently. We treat static and dynamic video content separately. Specifically, we build a global static scene model using an extended plane-based scene representation to synthesize temporally coherent novel video. Our plane-based scene representation is augmented with spherical harmonics and displacement maps to capture view-dependent effects and model non-planar complex surface geometry. We opt to represent the dynamic content as per-frame point clouds for efficiency. While such representations are inconsistency-prone, minor temporal inconsistencies are perceptually masked due to motion. We develop a method to quickly estimate such a hybrid video representation and render novel views in real time. Our experiments show that our method can render high-quality novel views from an in-the-wild video with comparable quality to state-of-the-art methods while being 100x faster in training and enabling real-time rendering.
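To make the view-dependent component concrete, the sketch below evaluates a degree-2 real spherical-harmonic color expansion for a single texel given a viewing direction, which is the kind of lookup a plane texture augmented with SH coefficients enables. It is a minimal illustration under stated assumptions, not the authors' implementation: the (9, 3) coefficient layout, the eval_sh_color helper, and the example coefficients are hypothetical and chosen only for demonstration.

```python
import numpy as np

# Real spherical-harmonic constants for degrees 0-2
# (standard constants used in graphics-style SH evaluation).
C0 = 0.28209479177387814
C1 = 0.4886025119029199
C2 = (1.0925484305920792, -1.0925484305920792, 0.31539156525252005,
      -1.0925484305920792, 0.5462742152960396)

def eval_sh_color(sh_coeffs: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate a degree-2 real SH expansion of RGB color for one view direction.

    sh_coeffs: (9, 3) array of per-texel SH coefficients (hypothetical layout).
    view_dir:  (3,) vector from the surface point toward the camera.
    """
    x, y, z = view_dir / np.linalg.norm(view_dir)
    color = C0 * sh_coeffs[0]
    # Degree 1: linear in the view direction.
    color += -C1 * y * sh_coeffs[1] + C1 * z * sh_coeffs[2] - C1 * x * sh_coeffs[3]
    # Degree 2: quadratic terms.
    xx, yy, zz, xy, yz, xz = x * x, y * y, z * z, x * y, y * z, x * z
    color += (C2[0] * xy * sh_coeffs[4]
              + C2[1] * yz * sh_coeffs[5]
              + C2[2] * (2.0 * zz - xx - yy) * sh_coeffs[6]
              + C2[3] * xz * sh_coeffs[7]
              + C2[4] * (xx - yy) * sh_coeffs[8])
    return np.clip(color, 0.0, 1.0)

# Example: a texel whose color brightens when viewed from the +z direction.
coeffs = np.zeros((9, 3))
coeffs[0] = [0.5, 0.5, 0.5]   # diffuse base color via the DC term
coeffs[2] = [0.3, 0.3, 0.3]   # z-aligned degree-1 lobe adds view dependence
print(eval_sh_color(coeffs, np.array([0.0, 0.0, 1.0])))    # brighter
print(eval_sh_color(coeffs, np.array([0.0, 0.0, -1.0])))   # darker
```

Because the expansion is just a small dot product per texel, view-dependent shading of this form can be evaluated per pixel at real-time rates, consistent with the efficiency claim in the abstract.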