

Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering

November 30, 2023
Authors: Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, Bo Dai
cs.AI

Abstract

Neural rendering methods have significantly advanced photo-realistic 3D scene rendering in various academic and industrial applications. The recent 3D Gaussian Splatting method has achieved state-of-the-art rendering quality and speed, combining the benefits of both primitive-based and volumetric representations. However, it often leads to heavily redundant Gaussians that try to fit every training view, neglecting the underlying scene geometry. Consequently, the resulting model becomes less robust to significant view changes, texture-less areas, and lighting effects. We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians, and predicts their attributes on-the-fly based on viewing direction and distance within the view frustum. Anchor growing and pruning strategies are developed based on the importance of neural Gaussians to reliably improve the scene coverage. We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering. We also demonstrate an enhanced capability to accommodate scenes with varying levels-of-detail and view-dependent observations, without sacrificing rendering speed.
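The core idea above can be illustrated with a minimal toy sketch: each anchor carries a learnable feature and a set of offsets for the neural Gaussians it spawns, and a small view-dependent function predicts per-Gaussian attributes (here, just opacity) from the anchor feature, viewing direction, and distance. All names, dimensions, and the single linear layer standing in for the paper's MLPs are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): anchor count, feature dim,
# and number of neural Gaussians spawned per anchor.
N_ANCHORS, FEAT_DIM, K = 16, 8, 4

# Each anchor stores a position, a learnable feature vector, and K learnable
# offsets for the neural Gaussians it spawns (names are illustrative).
anchors = {
    "pos": rng.normal(size=(N_ANCHORS, 3)),
    "feat": rng.normal(size=(N_ANCHORS, FEAT_DIM)),
    "offsets": rng.normal(scale=0.1, size=(N_ANCHORS, K, 3)),
}

def view_adaptive_attrs(anchors, cam_pos, w):
    """Toy stand-in for the paper's attribute MLPs: predicts per-Gaussian
    opacity from the anchor feature concatenated with the viewing direction
    and distance to the camera."""
    delta = anchors["pos"] - cam_pos                  # (N, 3)
    dist = np.linalg.norm(delta, axis=-1, keepdims=True)  # (N, 1)
    viewdir = delta / dist                            # unit viewing direction
    x = np.concatenate([anchors["feat"], viewdir, dist], axis=-1)  # (N, F+4)
    opacity = np.tanh(x @ w)                          # (N, K), one per Gaussian
    # Gaussian centers: anchor position plus its local offsets.
    positions = anchors["pos"][:, None, :] + anchors["offsets"]    # (N, K, 3)
    return positions, opacity

w = rng.normal(size=(FEAT_DIM + 4, K))  # stand-in for learned MLP weights
pos, opa = view_adaptive_attrs(anchors, cam_pos=np.zeros(3), w=w)
print(pos.shape, opa.shape)  # (16, 4, 3) (16, 4)
```

Because opacity is recomputed per camera pose, moving the camera changes which Gaussians contribute; the pruning strategy mentioned in the abstract would then drop anchors whose Gaussians stay consistently near-transparent, e.g. `keep = np.abs(opa).max(axis=1) > threshold`.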