
Vista4D: Video Reshooting with 4D Point Clouds

April 23, 2026
作者: Kuan Heng Lin, Zhizheng Liu, Pablo Salamanca, Yash Kant, Ryan Burgert, Yuancheng Xu, Koichi Namekata, Yiwei Zhao, Bolei Zhou, Micah Goldblum, Paul Debevec, Ning Yu
cs.AI

Abstract

We present Vista4D, a robust and flexible video reshooting framework that grounds the input video and target cameras in a 4D point cloud. Specifically, given an input video, our method re-synthesizes the scene with the same dynamics from a different camera trajectory and viewpoint. Existing video reshooting methods often struggle with depth estimation artifacts in real-world dynamic videos, and they fail both to preserve content appearance and to maintain precise camera control for challenging new trajectories. We build a 4D-grounded point cloud representation with static pixel segmentation and 4D reconstruction to explicitly preserve seen content and provide rich camera signals, and we train with reconstructed multiview dynamic data for robustness against point cloud artifacts during real-world inference. Our results demonstrate improved 4D consistency, camera control, and visual quality compared to state-of-the-art baselines across a variety of videos and camera paths. Moreover, our method generalizes to real-world applications such as dynamic scene expansion and 4D scene recomposition. See our project page for results, code, and models: https://eyeline-labs.github.io/Vista4D
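To make the core conditioning idea concrete, below is a minimal sketch of the per-frame step a 4D point-cloud camera signal implies: a time-indexed point cloud (static points shared across frames, dynamic points per frame) is projected into a target camera with a standard pinhole model. This is an illustrative toy, not Vista4D's actual implementation; all names, the camera parameters, and the point data are assumptions for demonstration only.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixels with a pinhole camera (K, [R|t]).

    Returns pixel coordinates (Nx2) and per-point depth (N,)."""
    cam = points_3d @ R.T + t          # world frame -> camera frame
    z = cam[:, 2:3]                    # depth along the optical axis
    uv = (cam @ K.T)[:, :2] / z        # perspective divide
    return uv, z[:, 0]

# Toy "4D" point cloud: one static point shared across T frames,
# plus one dynamic point that drifts along x over time.
T = 3
static_pt = np.array([[0.0, 0.0, 5.0]])
dynamic = [np.array([[0.1 * f, 0.0, 4.0]]) for f in range(T)]
cloud_4d = [np.vstack([static_pt, dynamic[f]]) for f in range(T)]

# Hypothetical target camera: 500px focal length, 640x480 principal point.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)          # identity pose for simplicity

# Per-frame point renders; in a reshooting pipeline these (rasterized to
# images) would condition the video generator on the target trajectory.
renders = [project_points(pts, K, R, t)[0] for pts in cloud_4d]
```

A real pipeline would additionally rasterize the projected points with depth ordering and vary `R`, `t` per frame along the new camera path; the toy above only shows the geometric grounding step.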