Instant4D: 4D Gaussian Splatting in Minutes
October 1, 2025
Authors: Zhanpeng Luo, Haoxi Ran, Li Lu
cs.AI
Abstract
Dynamic view synthesis has seen significant advances, yet reconstructing
scenes from uncalibrated, casual video remains challenging due to slow
optimization and complex parameter estimation. In this work, we present
Instant4D, a monocular reconstruction system that leverages a native 4D
representation to efficiently process casual video sequences within minutes,
without calibrated cameras or depth sensors. Our method begins with geometric
recovery through deep visual SLAM, followed by grid pruning to optimize scene
representation. Our design significantly reduces redundancy while maintaining
geometric integrity, cutting model size to under 10% of its original footprint.
To handle temporal dynamics efficiently, we introduce a streamlined 4D Gaussian
representation, achieving a 30x speed-up and reducing training time to within
two minutes, while maintaining competitive performance across several
benchmarks. Our method reconstructs a single video within 10 minutes on the
Dycheck dataset or for a typical 200-frame video. We further apply our model to
in-the-wild videos, showcasing its generalizability. Our project website is
available at https://instant4d.github.io/.
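
The grid-pruning step mentioned in the abstract is not specified in detail here. A minimal sketch, assuming the SLAM output is a fused point cloud and that pruning amounts to keeping one representative point per cell of a regular voxel grid, might look like the following; the function name, voxel size, and the "first point per voxel" rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def grid_prune(points: np.ndarray, voxel_size: float = 0.05) -> np.ndarray:
    """Keep one representative point per occupied voxel.

    points: (N, 3) array of 3D positions recovered by visual SLAM.
    voxel_size: edge length of the pruning grid (illustrative value).
    Returns an (M, 3) array with M <= N, removing redundant points
    while preserving the coarse geometry.
    """
    # Quantize each point to the integer index of the voxel it falls in.
    voxel_idx = np.floor(points / voxel_size).astype(np.int64)
    # Keep the first point encountered in each distinct voxel.
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return points[np.sort(keep)]

# Example: a dense cloud collapses to a much smaller set of seeds
# that could initialize the Gaussians.
cloud = np.random.rand(1_000_000, 3)
seeds = grid_prune(cloud, voxel_size=0.05)
print(seeds.shape)
```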
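Likewise, the abstract does not define the "streamlined 4D Gaussian representation." A common simplification in 4D Gaussian splatting is to factor each primitive into a 3D spatial Gaussian and an independent 1D temporal Gaussian that modulates its opacity over time; whether Instant4D uses exactly this factorization is not stated, so the sketch below is only an assumed illustration, with all field names hypothetical.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian4D:
    """A 4D Gaussian factored into a 3D spatial part and a 1D temporal part.

    This factorization is a common simplification in 4D Gaussian splatting;
    the exact parameterization used by Instant4D is not given in the abstract.
    """
    mu_xyz: np.ndarray     # (3,) spatial center
    scale_xyz: np.ndarray  # (3,) per-axis spatial scale (rotation omitted here)
    mu_t: float            # temporal center
    sigma_t: float         # temporal extent
    opacity: float         # base opacity
    color: np.ndarray      # (3,) RGB

    def opacity_at(self, t: float) -> float:
        """Modulate opacity by a 1D Gaussian in time, so the primitive
        contributes only near its temporal center."""
        w = np.exp(-0.5 * ((t - self.mu_t) / self.sigma_t) ** 2)
        return self.opacity * w
```

Under this kind of factorization, rendering a frame at time t reduces to standard 3D Gaussian splatting with per-primitive opacities scaled by the temporal weight, which is one way such a representation can stay fast to optimize.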