FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple Super-Resolution Pipeline
December 15, 2023
Authors: Chien-Yu Lin, Qichen Fu, Thomas Merth, Karren Yang, Anurag Ranjan
cs.AI
Abstract
Super-resolution (SR) techniques have recently been proposed to upscale the
outputs of neural radiance fields (NeRF) and generate high-quality images with
enhanced inference speeds. However, existing NeRF+SR methods increase training
overhead by using extra input features, loss functions, and/or expensive
training procedures such as knowledge distillation. In this paper, we aim to
leverage SR for efficiency gains without costly training or architectural
changes. Specifically, we build a simple NeRF+SR pipeline that directly
combines existing modules, and we propose a lightweight augmentation technique,
random patch sampling, for training. Compared to existing NeRF+SR methods, our
pipeline mitigates the SR computing overhead and can be trained up to 23x
faster, making it feasible to run on consumer devices such as the Apple
MacBook. Experiments show our pipeline can upscale NeRF outputs by 2-4x while
maintaining high quality, increasing inference speeds by up to 18x on an NVIDIA
V100 GPU and 12.8x on an M1 Pro chip. We conclude that SR can be a simple but
effective technique for improving the efficiency of NeRF models for consumer
devices.
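The pipeline the abstract describes (a NeRF renderer producing a low-resolution image, an off-the-shelf SR module upscaling it, and random patch sampling used to supervise training) might be sketched roughly as follows. This is a hedged illustration, not the authors' implementation: `render_nerf_lowres` and the nearest-neighbor `sr_upscale` are hypothetical placeholders standing in for a trained NeRF and a learned SR network.

```python
import numpy as np

def render_nerf_lowres(h, w):
    # Placeholder for a NeRF renderer producing a low-resolution RGB image.
    # A real pipeline would ray-march a trained NeRF at reduced resolution.
    rng = np.random.default_rng(0)
    return rng.random((h, w, 3), dtype=np.float32)

def sr_upscale(img, scale):
    # Placeholder SR module: nearest-neighbor upsampling stands in for a
    # learned super-resolution network that restores high-frequency detail.
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def random_patch_sample(gt_img, patch_size, rng):
    # Random patch sampling augmentation (as named in the abstract):
    # crop a random patch from the full-resolution ground-truth image
    # so the SR module can be supervised on patches rather than full frames.
    h, w, _ = gt_img.shape
    y = rng.integers(0, h - patch_size + 1)
    x = rng.integers(0, w - patch_size + 1)
    return gt_img[y:y + patch_size, x:x + patch_size]

# Inference sketch: render at low resolution, then upscale 4x.
low_res = render_nerf_lowres(100, 100)
high_res = sr_upscale(low_res, 4)
print(high_res.shape)  # (400, 400, 3)
```

Because only the lightweight SR module sees high-resolution supervision, the expensive NeRF evaluation happens at low resolution, which is where the reported inference speedups come from.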