FSGS: Real-Time Few-shot View Synthesis using Gaussian Splatting

December 1, 2023
Authors: Zehao Zhu, Zhiwen Fan, Yifan Jiang, Zhangyang Wang
cs.AI

Abstract

Novel view synthesis from limited observations remains an important and persistent task. However, existing NeRF-based few-shot view synthesis methods often compromise efficiency to obtain an accurate 3D representation. To address this challenge, we propose a few-shot view synthesis framework based on 3D Gaussian Splatting that enables real-time and photo-realistic view synthesis with as few as three training views. The proposed method, dubbed FSGS, handles the extremely sparse set of initialized SfM points with a thoughtfully designed Gaussian Unpooling process. Our method iteratively distributes new Gaussians around the most representative locations and subsequently infills local details in vacant areas. We also integrate a large-scale pre-trained monocular depth estimator into the Gaussian optimization process, leveraging online augmented views to guide the geometric optimization towards an optimal solution. Starting from the sparse points observed from limited input viewpoints, our FSGS can accurately grow into unseen regions, comprehensively covering the scene and boosting the rendering quality of novel views. Overall, FSGS achieves state-of-the-art performance in both accuracy and rendering efficiency across diverse datasets, including LLFF, Mip-NeRF360, and Blender. Project website: https://zehaozhu.github.io/FSGS/.
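To make the two mechanisms named in the abstract more concrete, the sketch below gives one plausible reading of them; it is a minimal illustration, not the authors' released implementation. It shows a Gaussian Unpooling step that densifies a sparse set of Gaussian centers by inserting new points between each center and its nearest neighbours, and a scale-invariant depth regularizer that aligns rendered depth with a monocular estimate. The k-nearest-neighbour rule, the midpoint placement, and the correlation-style loss are illustrative assumptions, not details stated in the abstract.

```python
# A minimal sketch (assumptions noted below) of two ideas from the FSGS
# abstract: "Gaussian Unpooling" to densify a sparse SfM initialization,
# and a monocular-depth regularizer guiding Gaussian optimization.
import torch


def gaussian_unpool(centers: torch.Tensor, k: int = 2) -> torch.Tensor:
    """Insert a new Gaussian center at the midpoint of the edge from each
    existing center to each of its k nearest neighbours.

    centers: (N, 3) tensor of Gaussian means, with N > k.
    Returns a densified (N + N*k, 3) tensor of means.
    (The midpoint-on-kNN-edge rule is an assumption for illustration.)
    """
    dists = torch.cdist(centers, centers)           # (N, N) pairwise distances
    dists.fill_diagonal_(float("inf"))              # exclude self-matches
    knn_idx = dists.topk(k, largest=False).indices  # (N, k) nearest neighbours
    midpoints = 0.5 * (centers.unsqueeze(1) + centers[knn_idx])  # (N, k, 3)
    return torch.cat([centers, midpoints.reshape(-1, 3)], dim=0)


def depth_correlation_loss(rendered_depth: torch.Tensor,
                           mono_depth: torch.Tensor) -> torch.Tensor:
    """Scale-invariant depth guidance: penalize low Pearson correlation
    between the rendered depth map and a monocular depth estimate, so only
    the relative depth structure is enforced (an assumed loss form)."""
    r = rendered_depth.flatten() - rendered_depth.mean()
    m = mono_depth.flatten() - mono_depth.mean()
    corr = (r * m).sum() / (r.norm() * m.norm() + 1e-8)
    return 1.0 - corr
```

In a training loop, one would periodically apply gaussian_unpool to the optimized means and add the depth term, suitably weighted, to the photometric loss on both the training views and the online augmented views the abstract mentions.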