NeRF-XL: Scaling NeRFs with Multiple GPUs
April 24, 2024
Authors: Ruilong Li, Sanja Fidler, Angjoo Kanazawa, Francis Williams
cs.AI
Abstract
We present NeRF-XL, a principled method for distributing Neural Radiance
Fields (NeRFs) across multiple GPUs, thus enabling the training and rendering
of NeRFs with an arbitrarily large capacity. We begin by revisiting existing
multi-GPU approaches, which decompose large scenes into multiple independently
trained NeRFs, and identify several fundamental issues with these methods that
hinder improvements in reconstruction quality as additional computational
resources (GPUs) are used in training. NeRF-XL remedies these issues and
enables the training and rendering of NeRFs with an arbitrary number of
parameters by simply using more hardware. At the core of our method lies a
novel distributed training and rendering formulation, which is mathematically
equivalent to the classic single-GPU case and minimizes communication between
GPUs. By unlocking NeRFs with arbitrarily large parameter counts, our approach
is the first to reveal multi-GPU scaling laws for NeRFs, showing improvements
in reconstruction quality with larger parameter counts and speed improvements
with more GPUs. We demonstrate the effectiveness of NeRF-XL on a wide variety
of datasets, including the largest open-source dataset to date, MatrixCity,
containing 258K images covering a 25km^2 city area.
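For intuition on why a distributed formulation can remain exactly equivalent to single-GPU rendering, consider the standard volume rendering of a ray split into K disjoint, depth-ordered segments, one per GPU; the notation below is an illustrative sketch under that assumption, not the paper's own derivation. If GPU k computes a local color contribution C_k and a local transmittance T_k over its segment only, the full-ray quantities can be recovered exactly by compositing the per-segment results in depth order:

    C = \sum_{k=1}^{K} \Big( \prod_{j=1}^{k-1} T_j \Big) C_k, \qquad T = \prod_{k=1}^{K} T_k.

Because this identity holds exactly, each GPU would only need to exchange one (color, transmittance) pair per ray rather than per-sample data, which is consistent with the abstract's claims of mathematical equivalence to the single-GPU case and minimal inter-GPU communication.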