RadarGen: Automotive Radar Point Cloud Generation from Cameras
December 19, 2025
Authors: Tomer Borreda, Fangqiang Ding, Sanja Fidler, Shengyu Huang, Or Litany
cs.AI
Abstract
We present RadarGen, a diffusion model for synthesizing realistic automotive radar point clouds from multi-view camera imagery. RadarGen adapts efficient image-latent diffusion to the radar domain by representing radar measurements in bird's-eye-view form that encodes spatial structure together with radar cross section (RCS) and Doppler attributes. A lightweight recovery step reconstructs point clouds from the generated maps. To better align generation with the visual scene, RadarGen incorporates BEV-aligned depth, semantic, and motion cues extracted from pretrained foundation models, which guide the stochastic generation process toward physically plausible radar patterns. Conditioning on images makes the approach broadly compatible, in principle, with existing visual datasets and simulation frameworks, offering a scalable direction for multimodal generative simulation. Evaluations on large-scale driving data show that RadarGen captures characteristic radar measurement distributions and reduces the gap to perception models trained on real data, marking a step toward unified generative simulation across sensing modalities.
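The abstract describes representing radar measurements as BEV maps that encode occupancy together with RCS and Doppler channels, with a lightweight recovery step that reads point clouds back from the generated maps. The sketch below illustrates that encode/decode round trip in its simplest form; the grid extent, resolution, channel layout, and occupancy threshold are illustrative assumptions, not RadarGen's actual configuration.

```python
import numpy as np

def points_to_bev(points, extent=50.0, res=0.5):
    """Rasterize radar points (x, y, rcs, doppler) into a 3-channel BEV map:
    occupancy, RCS, and Doppler velocity. Extents/resolution are assumed."""
    n = int(2 * extent / res)
    bev = np.zeros((3, n, n), dtype=np.float32)
    for x, y, rcs, dop in points:
        i = int((x + extent) / res)
        j = int((y + extent) / res)
        if 0 <= i < n and 0 <= j < n:
            bev[0, i, j] = 1.0   # occupancy
            bev[1, i, j] = rcs   # radar cross section
            bev[2, i, j] = dop   # Doppler (radial velocity)
    return bev

def bev_to_points(bev, extent=50.0, res=0.5, thresh=0.5):
    """Lightweight recovery: emit one point per occupied BEV cell,
    placed at the cell center, carrying the cell's RCS and Doppler."""
    occ = bev[0] > thresh
    idx = np.argwhere(occ)
    xs = idx[:, 0] * res - extent + res / 2
    ys = idx[:, 1] * res - extent + res / 2
    return np.stack([xs, ys, bev[1][occ], bev[2][occ]], axis=1)
```

In a diffusion setup, the generative model would denoise maps of this form in an image latent space, and only the decoder-side recovery step above converts them back to point sets; quantization error is bounded by half the cell resolution.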