
MeshSplatting: Differentiable Rendering with Opaque Meshes

December 7, 2025
Authors: Jan Held, Sanghyun Son, Renaud Vandeghen, Daniel Rebain, Matheus Gadelha, Yi Zhou, Anthony Cioppa, Ming C. Lin, Marc Van Droogenbroeck, Andrea Tagliasacchi
cs.AI

Abstract

Primitive-based splatting methods like 3D Gaussian Splatting have revolutionized novel view synthesis with real-time rendering. However, their point-based representations remain incompatible with the mesh-based pipelines that power AR/VR and game engines. We present MeshSplatting, a mesh-based reconstruction approach that jointly optimizes geometry and appearance through differentiable rendering. By enforcing connectivity via restricted Delaunay triangulation and refining surface consistency, MeshSplatting produces smooth, visually high-quality meshes end to end that render efficiently in real-time 3D engines. On Mip-NeRF360, it boosts PSNR by +0.69 dB over MiLo, the current state of the art for mesh-based novel view synthesis, while training 2x faster and using half the memory, bridging neural rendering and interactive 3D graphics for seamless real-time scene interaction. The project page is available at https://meshsplatting.github.io/.
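
The abstract does not spell out the optimization loop, but the core mechanism it names is standard differentiable rendering: rasterize the mesh with a smooth coverage term so that an image-space loss back-propagates to vertex positions and appearance. Below is a minimal, self-contained PyTorch sketch of that idea for a single 2D triangle with sigmoid-softened edge tests; it is not the paper's renderer, and every name in it (soft_render, edge_fn, the sharpness constant) is ours for illustration.

```python
# Illustrative sketch only: joint optimization of geometry (vertices) and
# appearance (color) through a differentiable soft rasterizer. Not the
# MeshSplatting implementation.
import torch

H = W = 64
ys, xs = torch.meshgrid(torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
pix = torch.stack([xs, ys], dim=-1)  # (H, W, 2) pixel centers in [0, 1]^2

def edge_fn(a, b, p):
    """Signed area test: positive where p lies to the left of edge a->b."""
    return (b[0] - a[0]) * (p[..., 1] - a[1]) - (b[1] - a[1]) * (p[..., 0] - a[0])

def soft_render(verts, color, sharpness=200.0):
    """Differentiable coverage of one CCW triangle, times its RGB color."""
    cov = (torch.sigmoid(sharpness * edge_fn(verts[0], verts[1], pix))
           * torch.sigmoid(sharpness * edge_fn(verts[1], verts[2], pix))
           * torch.sigmoid(sharpness * edge_fn(verts[2], verts[0], pix)))
    return cov[..., None] * color  # (H, W, 3) image

# Target: a fixed red triangle; start from a perturbed gray one and fit it.
target = soft_render(torch.tensor([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]]),
                     torch.tensor([1.0, 0.0, 0.0])).detach()
verts = torch.tensor([[0.3, 0.1], [0.9, 0.4], [0.4, 0.9]], requires_grad=True)
color = torch.tensor([0.5, 0.5, 0.5], requires_grad=True)
opt = torch.optim.Adam([verts, color], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = ((soft_render(verts, color) - target) ** 2).mean()
    loss.backward()  # gradients flow to both geometry and appearance
    opt.step()
print(f"final loss: {loss.item():.6f}")
```

Because the edge tests are smoothed rather than hard-thresholded, the photometric loss is differentiable in the vertex coordinates, which is what lets geometry and appearance be fit jointly from images.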
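Similarly, "enforcing connectivity via restricted Delaunay triangulation" means the optimized primitives are not free-floating splats: their vertices are shared across a Delaunay-induced triangle mesh. As a simplified stand-in (the actual method restricts a 3D Delaunay complex to the reconstructed surface, which the abstract does not detail), this SciPy snippet shows how a plain 2D Delaunay triangulation turns an unstructured point set into a connected mesh.

```python
# Simplified 2D stand-in, not the paper's algorithm: Delaunay triangulation
# induces shared-vertex connectivity over unstructured points.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((100, 2))   # unstructured 2D "splat centers"
tri = Delaunay(points)          # Delaunay triangulation of the point set

faces = tri.simplices           # (n_tri, 3) vertex indices per triangle
edges = set()                   # collect unique undirected edges
for f in faces:
    for i in range(3):
        edges.add(tuple(sorted((f[i], f[(i + 1) % 3]))))
print(f"{len(points)} points -> {len(faces)} triangles, {len(edges)} unique edges")
```

Every vertex now participates in a shared connectivity graph, which is what allows the result to be exported to standard mesh pipelines rather than remaining a point cloud.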