
Video2Game: Real-time, Interactive, Realistic and Browser-Compatible Environment from a Single Video

April 15, 2024
Authors: Hongchi Xia, Zhi-Hao Lin, Wei-Chiu Ma, Shenlong Wang
cs.AI

Abstract

Creating high-quality and interactive virtual environments, such as games and simulators, often involves complex and costly manual modeling processes. In this paper, we present Video2Game, a novel approach that automatically converts videos of real-world scenes into realistic and interactive game environments. At the heart of our system are three core components: (i) a neural radiance fields (NeRF) module that effectively captures the geometry and visual appearance of the scene; (ii) a mesh module that distills the knowledge from NeRF for faster rendering; and (iii) a physics module that models the interactions and physical dynamics among the objects. By following the carefully designed pipeline, one can construct an interactable and actionable digital replica of the real world. We benchmark our system on both indoor and large-scale outdoor scenes. We show that we can not only produce highly realistic renderings in real time, but also build interactive games on top.
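The abstract describes a three-stage pipeline: fit a NeRF to the input video, distill it into a mesh for real-time rendering, and attach a physics layer for interaction. The sketch below is a minimal, hypothetical Python illustration of how such stages could be chained; all class and method names (NeRFModule, MeshModule, PhysicsModule, video2game) are assumptions for exposition, not the authors' released API.

```python
# Hypothetical sketch of the three-stage pipeline outlined in the abstract.
# Names and structure are illustrative assumptions, not the paper's code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class NeRFModule:
    """Stage (i): fit a neural radiance field to posed video frames,
    capturing scene geometry and view-dependent appearance."""
    frames: List[str] = field(default_factory=list)

    def fit(self) -> "NeRFModule":
        # Placeholder: optimize density and color fields against the frames.
        return self


@dataclass
class MeshModule:
    """Stage (ii): distill the trained NeRF into a textured mesh so the
    scene can be rasterized in real time (e.g. in a browser)."""
    nerf: NeRFModule

    def distill(self) -> "MeshModule":
        # Placeholder: extract geometry and bake appearance into textures,
        # supervised by renderings from the NeRF.
        return self


@dataclass
class PhysicsModule:
    """Stage (iii): attach collision proxies and rigid-body dynamics so
    objects in the mesh can be interacted with."""
    mesh: MeshModule

    def assemble(self) -> "PhysicsModule":
        # Placeholder: assign colliders and masses, hook up a physics engine.
        return self


def video2game(frames: List[str]) -> PhysicsModule:
    """End-to-end pipeline: video frames -> interactive digital replica."""
    nerf = NeRFModule(frames).fit()
    mesh = MeshModule(nerf).distill()
    return PhysicsModule(mesh).assemble()


if __name__ == "__main__":
    replica = video2game(["frame_0001.png", "frame_0002.png"])
    print(type(replica).__name__)  # PhysicsModule
```

The staging mirrors the abstract's division of labor: the NeRF provides fidelity, the distilled mesh provides speed, and the physics layer provides interactivity.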
