

Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering

November 30, 2023
Authors: Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, Bo Dai
cs.AI

Abstract

Neural rendering methods have significantly advanced photo-realistic 3D scene rendering in various academic and industrial applications. The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed by combining the benefits of both primitive-based and volumetric representations. However, it often leads to heavily redundant Gaussians that try to fit every training view, neglecting the underlying scene geometry. Consequently, the resulting model becomes less robust to significant view changes, texture-less areas, and lighting effects. We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians, and predicts their attributes on-the-fly based on viewing direction and distance within the view frustum. Anchor growing and pruning strategies are developed based on the importance of neural Gaussians to reliably improve the scene coverage. We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering. We also demonstrate an enhanced capability to accommodate scenes with varying levels-of-detail and view-dependent observations, without sacrificing rendering speed.
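The core idea, decoding per-anchor neural Gaussians on-the-fly from the viewing direction and camera distance, can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the function name `predict_neural_gaussians`, the feature size, the number of Gaussians per anchor `k`, and the random-weight MLP standing in for the learned decoders are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict_neural_gaussians(anchor_pos, anchor_feat, cam_pos, k=5):
    """Hypothetical sketch: decode k neural Gaussians from one anchor.

    The view direction and camera distance are concatenated with the
    anchor's latent feature and passed through a tiny MLP whose random
    weights stand in for the learned decoders described in the paper.
    """
    delta = cam_pos - anchor_pos
    dist = np.linalg.norm(delta)
    view_dir = delta / dist
    x = np.concatenate([anchor_feat, view_dir, [dist]])  # view-dependent input

    # One hidden layer with placeholder random weights.
    W1 = rng.standard_normal((32, x.size))
    h = np.tanh(W1 @ x)

    def head(out_dim):
        # Per-Gaussian linear head: (k, out_dim, 32) @ (32,) -> (k, out_dim)
        return rng.standard_normal((k, out_dim, 32)) @ h

    opacity = 1.0 / (1.0 + np.exp(-head(1)))  # (k, 1), squashed to (0, 1)
    color   = 1.0 / (1.0 + np.exp(-head(3)))  # (k, 3) RGB
    scale   = np.exp(head(3))                 # (k, 3), positive scales
    offsets = head(3)                         # (k, 3) offsets from the anchor
    positions = anchor_pos + offsets
    return positions, opacity, color, scale
```

Because attributes are decoded per view rather than stored per Gaussian, the representation stays compact while remaining view-adaptive; anchors whose decoded Gaussians prove unimportant can then be pruned, and new anchors grown where coverage is poor.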