Video2Game: Real-time, Interactive, Realistic and Browser-Compatible Environment from a Single Video
April 15, 2024
作者: Hongchi Xia, Zhi-Hao Lin, Wei-Chiu Ma, Shenlong Wang
cs.AI
Abstract
Creating high-quality and interactive virtual environments, such as games and
simulators, often involves complex and costly manual modeling processes. In
this paper, we present Video2Game, a novel approach that automatically converts
videos of real-world scenes into realistic and interactive game environments.
At the heart of our system are three core components: (i) a neural radiance
fields (NeRF) module that effectively captures the geometry and visual
appearance of the scene; (ii) a mesh module that distills the knowledge from
NeRF for faster rendering; and (iii) a physics module that models the
interactions and physical dynamics among the objects. By following the
carefully designed pipeline, one can construct an interactable and actionable
digital replica of the real world. We benchmark our system on both indoor and
large-scale outdoor scenes. We show that we can not only produce
highly-realistic renderings in real-time, but also build interactive games on
top.
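Concretely, the abstract describes a sequential pipeline: the NeRF module reconstructs geometry and appearance from the video, the mesh module distills that reconstruction into a representation fast enough for real-time (browser) rendering, and the physics module adds object interactions on top. The sketch below is only a conceptual illustration of that flow; every class and function name in it (Frame, SceneNeRF, BakedMesh, PhysicsScene, video_to_game) is a hypothetical placeholder and not the authors' released code or API.

```python
# Hypothetical sketch of the three-stage pipeline described in the abstract.
# All names are illustrative placeholders, not the actual Video2Game code.
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    """A single posed video frame used as NeRF training data."""
    image_path: str        # RGB frame extracted from the input video
    camera_pose: list      # 4x4 camera-to-world matrix (nested lists)


class SceneNeRF:
    """Stage (i): fit a neural radiance field to the posed frames,
    capturing the scene's geometry and view-dependent appearance."""

    def fit(self, frames: List[Frame], iterations: int = 30_000) -> None:
        # Optimize volumetric density and color fields against the frames.
        ...

    def query_density(self, xyz):
        # Volume density at a 3D point; used later for surface extraction.
        ...


class BakedMesh:
    """Stage (ii): distill the NeRF into a textured triangle mesh so the
    scene can be rasterized in real time (e.g. in a browser via WebGL)."""

    @staticmethod
    def from_nerf(nerf: SceneNeRF, grid_resolution: int = 512) -> "BakedMesh":
        # E.g. extract a surface from the density field (marching cubes),
        # then bake NeRF colors into texture maps on that surface.
        ...


class PhysicsScene:
    """Stage (iii): attach collision geometry and rigid-body parameters to
    the mesh so objects can interact and respond to player actions."""

    def __init__(self, mesh: BakedMesh):
        self.mesh = mesh

    def add_rigid_body(self, name: str, mass_kg: float) -> None:
        # Register a movable object with the underlying physics engine.
        ...


def video_to_game(frames: List[Frame]) -> PhysicsScene:
    """Compose the three modules into an interactable digital replica."""
    nerf = SceneNeRF()
    nerf.fit(frames)                  # (i) reconstruct geometry + appearance
    mesh = BakedMesh.from_nerf(nerf)  # (ii) distill for real-time rendering
    return PhysicsScene(mesh)         # (iii) add interaction and dynamics
```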