FastSR-NeRF: Improving NeRF Efficiency on Consumer Devices with A Simple Super-Resolution Pipeline
December 15, 2023
Authors: Chien-Yu Lin, Qichen Fu, Thomas Merth, Karren Yang, Anurag Ranjan
cs.AI
Abstract
Super-resolution (SR) techniques have recently been proposed to upscale the
outputs of neural radiance fields (NeRF) and generate high-quality images with
enhanced inference speeds. However, existing NeRF+SR methods increase training
overhead by using extra input features, loss functions, and/or expensive
training procedures such as knowledge distillation. In this paper, we aim to
leverage SR for efficiency gains without costly training or architectural
changes. Specifically, we build a simple NeRF+SR pipeline that directly
combines existing modules, and we propose a lightweight augmentation technique,
random patch sampling, for training. Compared to existing NeRF+SR methods, our
pipeline mitigates the SR computing overhead and can be trained up to 23x
faster, making it feasible to run on consumer devices such as the Apple
MacBook. Experiments show our pipeline can upscale NeRF outputs by 2-4x while
maintaining high quality, increasing inference speeds by up to 18x on an NVIDIA
V100 GPU and 12.8x on an M1 Pro chip. We conclude that SR can be a simple but
effective technique for improving the efficiency of NeRF models for consumer
devices.
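The random patch sampling augmentation mentioned in the abstract can be illustrated with a minimal sketch: instead of training the SR module on full rendered frames, aligned low-resolution/high-resolution patch pairs are cropped at random positions. The function name, patch size, and array shapes below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sample_random_patch(lr_img, hr_img, lr_patch=32, scale=4, rng=None):
    """Sample an aligned (LR, HR) patch pair for SR training.

    lr_img: (h, w, 3) low-resolution NeRF render.
    hr_img: (h*scale, w*scale, 3) ground-truth high-resolution image.
    Returns a random LR crop and the corresponding HR crop.
    (Illustrative sketch; not the authors' code.)
    """
    rng = rng or np.random.default_rng()
    h, w = lr_img.shape[:2]
    # Pick a random top-left corner in low-resolution coordinates.
    y = int(rng.integers(0, h - lr_patch + 1))
    x = int(rng.integers(0, w - lr_patch + 1))
    lr_crop = lr_img[y:y + lr_patch, x:x + lr_patch]
    # The HR crop covers the same region, scaled up by the SR factor.
    hr_crop = hr_img[y * scale:(y + lr_patch) * scale,
                     x * scale:(x + lr_patch) * scale]
    return lr_crop, hr_crop
```

Because each training step sees a small patch rather than a full frame, this style of augmentation adds little overhead while exposing the SR module to many spatial contexts per image.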