City-on-Web: Real-time Neural Rendering of Large-scale Scenes on the Web
December 27, 2023
Authors: Kaiwen Song, Juyong Zhang
cs.AI
Abstract
NeRF has significantly advanced 3D scene reconstruction, capturing intricate
details across various environments. Existing methods have successfully
leveraged radiance field baking to facilitate real-time rendering of small
scenes. However, when applied to large-scale scenes, these techniques encounter
significant challenges, struggling to provide a seamless real-time experience
due to limited resources in computation, memory, and bandwidth. In this paper,
we propose City-on-Web, which represents the whole scene by partitioning it
into manageable blocks, each with its own Level-of-Detail, ensuring high
fidelity, efficient memory management and fast rendering. Meanwhile, we
carefully design the training and inference process such that the final
rendering result on the web is consistent with training. Thanks to our novel
representation and carefully designed training/inference process, we are the
first to achieve real-time rendering of large-scale scenes in
resource-constrained environments. Extensive experimental results demonstrate
that our method facilitates real-time rendering of large-scale scenes on a web
platform, achieving 32 FPS at 1080p resolution on an RTX 3060 GPU while
maintaining quality that closely rivals state-of-the-art
methods. Project page: https://ustc3dv.github.io/City-on-Web/
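The abstract's core idea is to split the scene into manageable blocks and render each block at a level of detail (LOD) suited to it. The paper does not specify the partitioning or LOD-selection rule here, but the general pattern can be sketched as a uniform ground-plane grid with a distance-based LOD heuristic. Everything below (the grid layout, the `Block` type, and the distance thresholds) is an illustrative assumption, not the authors' implementation.

```python
from dataclasses import dataclass
import math

@dataclass
class Block:
    """One scene block on a 2D ground-plane grid (hypothetical layout)."""
    i: int
    j: int
    center: tuple  # world-space (x, y) center of the block

def partition(scene_min, scene_max, n):
    """Split the scene's ground plane into an n x n grid of blocks."""
    sx = (scene_max[0] - scene_min[0]) / n
    sy = (scene_max[1] - scene_min[1]) / n
    return [Block(i, j, (scene_min[0] + (i + 0.5) * sx,
                         scene_min[1] + (j + 0.5) * sy))
            for i in range(n) for j in range(n)]

def select_lod(block, camera_xy, thresholds=(50.0, 150.0)):
    """Pick an LOD index (0 = finest) from camera-to-block distance.

    The thresholds are placeholder values: blocks nearer than the
    first threshold use the finest level, and each further band
    falls back to a coarser one.
    """
    d = math.dist(block.center, camera_xy)
    for lod, t in enumerate(thresholds):
        if d < t:
            return lod
    return len(thresholds)

blocks = partition((0.0, 0.0), (100.0, 100.0), 2)
lods = [select_lod(b, (0.0, 0.0)) for b in blocks]
```

At render time, a scheme like this lets the web client stream and draw only the representation each block actually needs, which is how the memory and bandwidth budget stays bounded for large scenes.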