

NeRF-XL: Scaling NeRFs with Multiple GPUs

April 24, 2024
Authors: Ruilong Li, Sanja Fidler, Angjoo Kanazawa, Francis Williams
cs.AI

Abstract

We present NeRF-XL, a principled method for distributing Neural Radiance Fields (NeRFs) across multiple GPUs, thus enabling the training and rendering of NeRFs with an arbitrarily large capacity. We begin by revisiting existing multi-GPU approaches, which decompose large scenes into multiple independently trained NeRFs, and identify several fundamental issues with these methods that hinder improvements in reconstruction quality as additional computational resources (GPUs) are used in training. NeRF-XL remedies these issues and enables the training and rendering of NeRFs with an arbitrary number of parameters by simply using more hardware. At the core of our method lies a novel distributed training and rendering formulation, which is mathematically equivalent to the classic single-GPU case and minimizes communication between GPUs. By unlocking NeRFs with arbitrarily large parameter counts, our approach is the first to reveal multi-GPU scaling laws for NeRFs, showing improvements in reconstruction quality with larger parameter counts and speed improvements with more GPUs. We demonstrate the effectiveness of NeRF-XL on a wide variety of datasets, including the largest open-source dataset to date, MatrixCity, containing 258K images covering a 25km^2 city area.
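The claimed equivalence to single-GPU rendering rests on a standard property of volume rendering: the accumulated color along a ray factorizes over disjoint ray segments, so each GPU can render only the samples inside its own spatial partition and report a single per-ray (color, transmittance) pair, and front-to-back compositing of those pairs reproduces the single-GPU result exactly. Below is a minimal illustrative sketch of that identity in Python; it is not the authors' implementation, and the two-way split, scalar colors, and single-process simulation of the "GPUs" are assumptions made purely for clarity.

# Minimal sketch (not NeRF-XL's code): compositing per-GPU ray segments
# is mathematically equivalent to rendering the whole ray on one GPU.
# Each "GPU" owns a disjoint interval of the ray; only its segment color
# and segment transmittance need to be exchanged.
import numpy as np

def render_segment(sigmas, colors, deltas):
    # Standard quadrature volume rendering over one ray segment.
    alphas = 1.0 - np.exp(-sigmas * deltas)                            # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))     # prefix transmittance
    seg_color = np.sum(trans * alphas * colors)                        # accumulated segment color
    seg_trans = np.prod(1.0 - alphas)                                  # light surviving the segment
    return seg_color, seg_trans

def composite_segments(segments):
    # Merge per-GPU (color, transmittance) pairs front to back.
    total_color, total_trans = 0.0, 1.0
    for seg_color, seg_trans in segments:
        total_color += total_trans * seg_color
        total_trans *= seg_trans
    return total_color

# Toy check on one ray with scalar "colors":
rng = np.random.default_rng(0)
sigmas = rng.uniform(0.0, 2.0, 8)
colors = rng.uniform(0.0, 1.0, 8)
deltas = np.full(8, 0.1)

full = composite_segments([render_segment(sigmas, colors, deltas)])
split = composite_segments([
    render_segment(sigmas[:4], colors[:4], deltas[:4]),   # handled by "GPU 0"
    render_segment(sigmas[4:], colors[4:], deltas[4:]),   # handled by "GPU 1"
])
assert np.isclose(full, split)   # identical result either way

Because only one color and one transmittance per ray cross device boundaries, inter-GPU communication stays small no matter how many samples or parameters each partition holds, which is consistent with the abstract's claim of minimized communication.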
