VideoRF: Rendering Dynamic Radiance Fields as 2D Feature Video Streams
December 3, 2023
Authors: Liao Wang, Kaixin Yao, Chengcheng Guo, Zhirui Zhang, Qiang Hu, Jingyi Yu, Lan Xu, Minye Wu
cs.AI
Abstract
Neural Radiance Fields (NeRFs) excel in photorealistically rendering static
scenes. However, rendering dynamic, long-duration radiance fields on ubiquitous
devices remains challenging, due to data storage and computational constraints.
In this paper, we introduce VideoRF, the first approach to enable real-time
streaming and rendering of dynamic radiance fields on mobile platforms. At the
core is a serialized 2D feature image stream representing the 4D radiance field
all in one. We introduce a tailored training scheme applied directly in this 2D
domain to enforce temporal and spatial redundancy in the feature image
stream. By leveraging this redundancy, we show that the feature image stream can
be efficiently compressed by 2D video codecs, which allows us to exploit video
hardware accelerators to achieve real-time decoding. Building on the feature
image stream, we further propose a novel rendering pipeline for VideoRF with
specialized space mappings to query radiance properties efficiently.
Paired with a deferred shading model, VideoRF is efficient enough to render in
real time on mobile devices. We have developed a
real-time interactive player that enables online streaming and rendering of
dynamic scenes, offering a seamless and immersive free-viewpoint experience
across a range of devices, from desktops to mobile phones.
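The abstract's core idea — a decoded 2D feature frame, a space mapping from 3D sample points to pixels, volume accumulation of features, and a single deferred shading call per ray — can be sketched in a toy NumPy example. All dimensions, the row-major serialization, and the `shade` stand-in for the shading MLP are illustrative assumptions, not the paper's actual mapping or network:

```python
import numpy as np

# Hypothetical toy dimensions (not from the paper).
GRID = 16      # resolution of the toy 3D volume
FEAT_DIM = 8   # feature channels per point in the 2D image
IMG_W = 64     # width of the serialized 2D feature image

def space_mapping(ix, iy, iz):
    """Map a 3D voxel index to a pixel in the 2D feature image.

    VideoRF learns a specialized mapping; a simple row-major
    serialization is used here purely for illustration.
    """
    flat = (ix * GRID + iy) * GRID + iz
    return flat // IMG_W, flat % IMG_W  # (row, col)

def query_features(feature_image, pts):
    """Fetch per-sample features from the decoded 2D feature frame."""
    idx = np.clip((pts * GRID).astype(int), 0, GRID - 1)
    rows, cols = space_mapping(idx[:, 0], idx[:, 1], idx[:, 2])
    return feature_image[rows, cols]  # (N, FEAT_DIM)

def render_ray(feature_image, samples, deltas, shade):
    """Accumulate features along the ray, then shade once (deferred)."""
    feats = query_features(feature_image, samples)      # (N, FEAT_DIM)
    density = np.maximum(feats[:, 0], 0.0)              # channel 0: density
    alpha = 1.0 - np.exp(-density * deltas)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)[:-1]])
    weights = trans * alpha                             # (N,)
    acc_feat = (weights[:, None] * feats).sum(axis=0)   # accumulated feature
    return shade(acc_feat)                              # one shading call per ray

# Toy "decoded frame" and a cheap stand-in for the shading MLP.
rng = np.random.default_rng(0)
frame = rng.random((GRID**3 // IMG_W + 1, IMG_W, FEAT_DIM)).astype(np.float32)
shade = lambda f: np.tanh(f[1:4])  # accumulated feature -> RGB

samples = rng.random((32, 3))      # 32 sample points along one ray
deltas = np.full(32, 1.0 / 32)     # step sizes between samples
rgb = render_ray(frame, samples, deltas, shade)
print(rgb.shape)  # (3,)
```

The deferred-shading structure is what makes the pipeline cheap: the expensive network runs once per ray on the accumulated feature, rather than once per sample, and the per-sample work reduces to texture fetches from a frame a hardware video decoder has already produced.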