CATS-V2V: A Real-World Vehicle-to-Vehicle Cooperative Perception Dataset with Complex Adverse Traffic Scenarios
November 14, 2025
Authors: Hangyu Li, Bofeng Cao, Zhaohui Liang, Wuzhen Li, Juyoung Oh, Yuxuan Chen, Shixiao Liang, Hang Zhou, Chengyuan Ma, Jiaxi Liu, Zheng Li, Peng Zhang, KeKe Long, Maolin Liu, Jackson Jiang, Chunlei Yu, Shengxiang Liu, Hongkai Yu, Xiaopeng Li
cs.AI
Abstract
Vehicle-to-Vehicle (V2V) cooperative perception has great potential to enhance autonomous driving performance by overcoming perception limitations in complex adverse traffic scenarios (CATS). Meanwhile, data serves as the fundamental infrastructure for modern autonomous driving AI. However, due to stringent data collection requirements, existing datasets focus primarily on ordinary traffic scenarios, constraining the benefits of cooperative perception. To address this challenge, we introduce CATS-V2V, the first-of-its-kind real-world dataset for V2V cooperative perception under complex adverse traffic scenarios. The dataset was collected by two hardware time-synchronized vehicles, covering 10 weather and lighting conditions across 10 diverse locations. The 100-clip dataset includes 60K frames of 10 Hz LiDAR point clouds and 1.26M multi-view 30 Hz camera images, along with 750K anonymized yet high-precision RTK-fixed GNSS and IMU records. Correspondingly, we provide time-consistent 3D bounding box annotations for objects, as well as static scene data, to construct a 4D BEV representation. On this basis, we propose a target-based temporal alignment method, ensuring that all objects are precisely aligned across all sensor modalities. We hope that CATS-V2V, the largest-scale, most supportive, and highest-quality dataset of its kind to date, will benefit the autonomous driving community in related tasks.
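The abstract does not detail the proposed target-based temporal alignment method, so the sketch below only illustrates the underlying problem it addresses: associating each 10 Hz LiDAR sweep with the nearest 30 Hz camera frame by timestamp. The function name, timestamp format, and max_offset_s tolerance are hypothetical assumptions for illustration, not part of the CATS-V2V release.

```python
# Minimal, generic nearest-timestamp association between a 10 Hz LiDAR stream
# and a 30 Hz camera stream. NOT the paper's target-based temporal alignment;
# all names and the tolerance value are illustrative assumptions.
from bisect import bisect_left

def nearest_timestamp_pairs(lidar_ts, camera_ts, max_offset_s=0.05):
    """Pair each LiDAR sweep timestamp with the closest camera frame timestamp.

    Both inputs are sorted lists of timestamps in seconds. Pairs whose time
    offset exceeds max_offset_s are discarded (assumed tolerance).
    """
    pairs = []
    for t in lidar_ts:
        i = bisect_left(camera_ts, t)
        # Candidates: the camera frames immediately before and after t.
        candidates = [c for c in (i - 1, i) if 0 <= c < len(camera_ts)]
        if not candidates:
            continue
        best = min(candidates, key=lambda c: abs(camera_ts[c] - t))
        if abs(camera_ts[best] - t) <= max_offset_s:
            pairs.append((t, camera_ts[best]))
    return pairs

# Example: one second of 10 Hz LiDAR sweeps vs. 30 Hz camera frames.
lidar_ts = [k * 0.1 for k in range(10)]
camera_ts = [k / 30.0 for k in range(30)]
print(nearest_timestamp_pairs(lidar_ts, camera_ts)[:3])
```

In this simple per-frame scheme, residual offsets of up to ~17 ms remain between a 10 Hz sweep and its nearest 30 Hz frame; the paper's target-based alignment goes further by aligning objects themselves consistently across all sensor modalities.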