SpatialVID: A Large-Scale Video Dataset with Spatial Annotations
September 11, 2025
Authors: Jiahao Wang, Yufeng Yuan, Rujie Zheng, Youtian Lin, Jian Gao, Lin-Zhuo Chen, Yajie Bao, Yi Zhang, Chang Zeng, Yanxi Zhou, Xiaoxiao Long, Hao Zhu, Zhaoxiang Zhang, Xun Cao, Yao Yao
cs.AI
Abstract
Significant progress has been made in spatial intelligence, spanning both
spatial reconstruction and world exploration. However, the scalability and
real-world fidelity of current models remain severely constrained by the
scarcity of large-scale, high-quality training data. While several datasets
provide camera pose information, they are typically limited in scale,
diversity, and annotation richness, particularly for real-world dynamic scenes
with ground-truth camera motion. To this end, we collect SpatialVID, a
dataset consisting of a large corpus of in-the-wild videos with diverse scenes,
camera movements, and dense 3D annotations such as per-frame camera poses,
depth, and motion instructions. Specifically, we collect more than 21,000 hours
of raw video and process it into 2.7 million clips through a hierarchical
filtering pipeline, totaling 7,089 hours of dynamic content. A subsequent
annotation pipeline enriches these clips with detailed spatial and semantic
information, including camera poses, depth maps, dynamic masks, structured
captions, and serialized motion instructions. Analysis of SpatialVID's data
statistics reveals a richness and diversity that directly foster improved model
generalization and performance, establishing it as a key asset for the video
and 3D vision research community.
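The annotation types enumerated above (per-frame camera poses, depth maps, dynamic masks, structured captions, and serialized motion instructions) suggest a natural per-clip record layout. The sketch below is purely illustrative: the class name, field names, array shapes, and the `load_clip` loader are all assumptions for exposition, not SpatialVID's actual schema or API.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class ClipAnnotation:
    """Hypothetical per-clip record mirroring the annotation types
    listed in the abstract; names and shapes are illustrative only."""

    clip_id: str
    num_frames: int
    # Per-frame camera poses, e.g. 4x4 world-to-camera matrices: (N, 4, 4).
    camera_poses: np.ndarray
    # Per-frame depth maps: (N, H, W).
    depth_maps: np.ndarray
    # Per-frame binary masks marking dynamic (moving) regions: (N, H, W).
    dynamic_masks: np.ndarray
    # Structured caption, e.g. separate fields for scene content and
    # camera behavior.
    caption: dict
    # Serialized motion instructions, e.g. a token sequence such as
    # ["move_forward", "pan_left", ...].
    motion_instructions: list[str] = field(default_factory=list)


# Hypothetical usage: count frames containing any dynamic content.
# clip = load_clip("clip_000001")  # assumed loader, not a real API
# n_dynamic = int((clip.dynamic_masks.sum(axis=(1, 2)) > 0).sum())
```

Grouping pose, depth, and mask arrays per clip like this keeps the geometric annotations frame-aligned, which is the property downstream reconstruction and world-exploration models would rely on.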