City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web
December 27, 2023
Authors: Kaiwen Song, Juyong Zhang
cs.AI
Abstract
NeRF has significantly advanced 3D scene reconstruction, capturing intricate
details across various environments. Existing methods have successfully
leveraged radiance field baking to facilitate real-time rendering of small
scenes. However, when applied to large-scale scenes, these techniques encounter
significant challenges, struggling to provide a seamless real-time experience
due to limited resources in computation, memory, and bandwidth. In this paper,
we propose City-on-Web, which represents the whole scene by partitioning it
into manageable blocks, each with its own Level-of-Detail, ensuring high
fidelity, efficient memory management, and fast rendering. Meanwhile, we
carefully design the training and inference process such that the final
rendering result on the web is consistent with training. Thanks to our novel
representation and carefully designed training/inference process, we are the
first to achieve real-time rendering of large-scale scenes in
resource-constrained environments. Extensive experimental results demonstrate
that our method facilitates real-time rendering of large-scale scenes on a web
platform, achieving 32 FPS at 1080p resolution on an RTX 3060 GPU, with
quality that closely rivals that of state-of-the-art
methods. Project page: https://ustc3dv.github.io/City-on-Web/
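As a rough illustration of the block-with-Level-of-Detail idea described in the abstract (this is not the authors' implementation; the `Block` structure, distance-based policy, and `base_dist` threshold are hypothetical), a renderer might select each block's LOD by camera distance:

```python
import math
from dataclasses import dataclass

@dataclass
class Block:
    center: tuple      # (x, y, z) world-space center of the block
    lod_assets: list   # baked radiance-field assets, index 0 = finest detail

def select_lod(block: Block, camera_pos: tuple, base_dist: float = 50.0):
    """Pick a coarser LOD as the camera moves away (hypothetical policy)."""
    dist = math.dist(block.center, camera_pos)
    level = min(int(dist // base_dist), len(block.lod_assets) - 1)
    return block.lod_assets[level]

# Example: three LOD levels; a camera 120 units away selects the coarsest.
block = Block(center=(0.0, 0.0, 0.0), lod_assets=["lod0", "lod1", "lod2"])
print(select_lod(block, (120.0, 0.0, 0.0)))  # -> lod2
```

Selecting a coarser representation for distant blocks is what keeps memory and bandwidth bounded as scene size grows, since only nearby blocks need their finest assets resident at any time.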