

Voyager: Long-Range and World-Consistent Video Diffusion for Explorable 3D Scene Generation

June 4, 2025
Authors: Tianyu Huang, Wangguandong Zheng, Tengfei Wang, Yuhao Liu, Zhenwei Wang, Junta Wu, Jie Jiang, Hui Li, Rynson W. H. Lau, Wangmeng Zuo, Chunchao Guo
cs.AI

Abstract

Real-world applications such as video gaming and virtual reality often demand the ability to model 3D scenes that users can explore along custom camera trajectories. While significant progress has been made in generating 3D objects from text or images, creating long-range, 3D-consistent, explorable 3D scenes remains a complex and challenging problem. In this work, we present Voyager, a novel video diffusion framework that generates world-consistent 3D point-cloud sequences from a single image along a user-defined camera path. Unlike existing approaches, Voyager achieves end-to-end scene generation and reconstruction with inherent consistency across frames, eliminating the need for 3D reconstruction pipelines (e.g., structure-from-motion or multi-view stereo). Our method integrates three key components: 1) World-Consistent Video Diffusion: a unified architecture that jointly generates aligned RGB and depth video sequences, conditioned on existing world observations to ensure global coherence; 2) Long-Range World Exploration: an efficient world cache with point culling and auto-regressive inference with smooth video sampling, enabling iterative scene extension with context-aware consistency; and 3) Scalable Data Engine: a video reconstruction pipeline that automates camera pose estimation and metric depth prediction for arbitrary videos, enabling large-scale, diverse training data curation without manual 3D annotations. Collectively, these designs yield a clear improvement over existing methods in visual quality and geometric accuracy, and support versatile applications.
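The long-range exploration component rests on a point-cloud world cache that is grown frame by frame and reused as conditioning for later clips. As a rough illustration of how such a cache could be maintained, the sketch below unprojects each generated RGB-D frame into world space with a pinhole camera model and culls near-duplicate points with a simple voxel grid. The names (WorldCache, unproject_depth) and the voxel-based culling are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Illustrative sketch (not the authors' code): maintaining a point-cloud
# "world cache" by unprojecting generated RGB-D frames along a camera path.
import numpy as np

def unproject_depth(depth, rgb, K, cam_to_world):
    """Lift an H x W metric depth map to colored 3D points in world space,
    given a 3x3 pinhole intrinsic matrix K and a 4x4 camera-to-world pose."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))           # pixel grids, shape (h, w)
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)   # (N, 4) homogeneous
    pts_world = (cam_to_world @ pts_cam.T).T[:, :3]          # (N, 3) world coordinates
    colors = rgb.reshape(-1, 3)
    valid = z > 0                                            # drop pixels with no depth
    return pts_world[valid], colors[valid]

class WorldCache:
    """Accumulates points from each new frame and culls near-duplicates with a
    voxel grid, so the cache stays bounded as the explored scene grows."""
    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.voxels = {}                                     # voxel index -> (point, color)

    def add_frame(self, depth, rgb, K, cam_to_world):
        pts, cols = unproject_depth(depth, rgb, K, cam_to_world)
        keys = np.floor(pts / self.voxel_size).astype(np.int64)
        for key, p, c in zip(map(tuple, keys), pts, cols):
            self.voxels.setdefault(key, (p, c))              # keep one point per voxel

    def points(self):
        """Return all cached points and colors as arrays (e.g., for reprojection
        into the next camera view as conditioning)."""
        pairs = list(self.voxels.values())
        return np.stack([p for p, _ in pairs]), np.stack([c for _, c in pairs])
```

In an auto-regressive loop of this kind, the cached points would be reprojected into the next camera pose to condition the diffusion model, and each newly generated RGB-D clip would be added back to the cache; the loop itself is omitted here.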