VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams
December 3, 2023
Authors: Liao Wang, Kaixin Yao, Chengcheng Guo, Zhirui Zhang, Qiang Hu, Jingyi Yu, Lan Xu, Minye Wu
cs.AI
Abstract
Neural Radiance Fields (NeRFs) excel in photorealistically rendering static
scenes. However, rendering dynamic, long-duration radiance fields on ubiquitous
devices remains challenging due to data storage and computational constraints.
In this paper, we introduce VideoRF, the first approach to enable real-time
streaming and rendering of dynamic radiance fields on mobile platforms. At the
core is a serialized 2D feature image stream that represents the entire 4D
radiance field. We introduce a tailored training scheme applied directly to
this 2D domain to impose temporal and spatial redundancy on the feature image
stream. By leveraging this redundancy, we show that the feature image stream can
be efficiently compressed by 2D video codecs, which allows us to exploit video
hardware accelerators to achieve real-time decoding. In addition, building on
the feature image stream, we propose a novel rendering pipeline for VideoRF
with specialized space mappings to query radiance properties efficiently.
Paired with a deferred shading model, VideoRF is efficient enough for
real-time rendering on mobile devices. We have developed a
real-time interactive player that enables online streaming and rendering of
dynamic scenes, offering a seamless and immersive free-viewpoint experience
across a range of devices, from desktops to mobile phones.
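
To make the abstract's pipeline concrete, below is a minimal NumPy sketch of the query-and-shade path it describes: features for each occupied voxel live in a decoded 2D feature frame, a space mapping sends 3D voxel indices to texels in that frame, and a deferred shading MLP runs once per pixel on the accumulated feature. Everything here is an illustrative assumption rather than the authors' implementation: the channel count, the density-in-channel-0 layout, the `index_map` lookup standing in for the paper's specialized space mappings, and the toy two-layer MLP are all hypothetical.

```python
# Sketch of a VideoRF-style feature query + deferred shading path.
# All names and layouts below are illustrative assumptions, not the paper's API.
import numpy as np

C = 8          # feature channels per voxel (assumed)
H = W = 1024   # resolution of the 2D feature image (assumed)
grid_res = 128 # resolution of the 3D occupancy grid (assumed)

# One decoded frame of the 2D feature stream: H x W texels, C channels each.
# In practice this would come from a hardware-decoded video frame.
feature_frame = np.random.rand(H, W, C).astype(np.float32)

# Space mapping: a precomputed lookup sending each occupied 3D voxel to the
# (u, v) texel where its features were serialized. Filled randomly here
# purely so the sketch runs end to end.
index_map = np.random.randint(0, H, size=(grid_res, grid_res, grid_res, 2))

def query_features(pts):
    """Fetch per-sample features and density for 3D points in [0, 1)^3."""
    vox = np.clip((pts * grid_res).astype(int), 0, grid_res - 1)
    uv = index_map[vox[:, 0], vox[:, 1], vox[:, 2]]   # (N, 2) texel coords
    feats = feature_frame[uv[:, 0], uv[:, 1]]         # (N, C)
    sigma = feats[:, 0]                               # density channel (assumed layout)
    return feats, sigma

def render_pixel(origin, direction, mlp, n_samples=64):
    """Deferred shading: accumulate features along the ray first, then run
    the shading MLP once per pixel instead of once per ray sample."""
    t = np.linspace(0.05, 1.0, n_samples)
    pts = origin + t[:, None] * direction
    feats, sigma = query_features(pts)
    delta = t[1] - t[0]
    alpha = 1.0 - np.exp(-np.maximum(sigma, 0.0) * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                           # volume-rendering weights
    feat_px = (weights[:, None] * feats).sum(axis=0)  # accumulated pixel feature
    return mlp(feat_px, direction)                    # one shading call per pixel

# Stand-in for the small view-dependent shading MLP (toy two-layer net).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((C + 3, 32)) * 0.1
W2 = rng.standard_normal((32, 3)) * 0.1
mlp = lambda f, d: np.maximum(np.concatenate([f, d]) @ W1, 0.0) @ W2

rgb = render_pixel(np.array([0.5, 0.5, 0.0]), np.array([0.0, 0.0, 1.0]), mlp)
print("shaded pixel:", rgb)
```

The key efficiency property this sketch illustrates is that the MLP cost is per pixel rather than per sample: ray marching only gathers and composites features via table lookups, which is what makes real-time rates on mobile hardware plausible.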