

AutoWeather4D: Autonomous Driving Video Weather Conversion via G-Buffer Dual-Pass Editing

March 27, 2026
Authors: Tianyu Liu, Weitao Xiong, Kunming Luo, Manyuan Zhang, Peng Liu, Yuan Liu, Ping Tan
cs.AI

Abstract

Generative video models have significantly advanced the photorealistic synthesis of adverse weather for autonomous driving; however, they consistently demand massive datasets to learn rare weather scenarios. While 3D-aware editing methods alleviate these data constraints by augmenting existing video footage, they are fundamentally bottlenecked by costly per-scene optimization and suffer from inherent geometric and illumination entanglement. In this work, we introduce AutoWeather4D, a feed-forward 3D-aware weather editing framework designed to explicitly decouple geometry and illumination. At the core of our approach is a G-buffer Dual-pass Editing mechanism. The Geometry Pass leverages explicit structural foundations to enable surface-anchored physical interactions, while the Light Pass analytically resolves light transport, accumulating the contributions of local illuminants into the global illumination to enable dynamic 3D local relighting. Extensive experiments demonstrate that AutoWeather4D achieves comparable photorealism and structural consistency to generative baselines while enabling fine-grained parametric physical control, serving as a practical data engine for autonomous driving.
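To make the dual-pass idea concrete, here is a minimal, hypothetical sketch of the two stages described above: a geometry pass that anchors a weather element to a surface by unprojecting G-buffer depth (with the surface normal orienting the effect), and a light pass that analytically accumulates local point-light contributions onto the globally lit image. The function names, the pinhole intrinsics layout, and the Lambertian falloff model are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def geometry_pass(depth, normals, pixel_uv, intrinsics):
    """Anchor a weather element (e.g. a raindrop splash) to the surface
    seen at pixel_uv by unprojecting its G-buffer depth into camera space.
    intrinsics = (fx, fy, cx, cy) of a pinhole camera (assumed layout)."""
    u, v = pixel_uv
    z = depth[v, u]
    fx, fy, cx, cy = intrinsics
    # Unproject the pixel to a 3D point lying on the observed surface.
    point = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])
    normal = normals[v, u]  # surface normal used to orient the effect
    return point, normal

def light_pass(points, normals, base_color, lights):
    """Accumulate local illuminant contributions onto the global
    illumination already baked into base_color. Each light is a
    (position, rgb_intensity) pair; shading is Lambertian with
    inverse-square falloff (an illustrative light-transport model)."""
    out = base_color.copy()
    for light_pos, light_rgb in lights:
        d = light_pos[None, :] - points                    # point -> light
        dist2 = np.sum(d * d, axis=1, keepdims=True)       # squared distance
        wi = d / np.sqrt(dist2)                            # incident direction
        ndotl = np.clip(np.sum(normals * wi, axis=1, keepdims=True), 0.0, None)
        out += base_color * light_rgb[None, :] * ndotl / (dist2 + 1e-6)
    return np.clip(out, 0.0, 1.0)
```

Keeping the two passes separate is what the decoupling claim amounts to in this sketch: the geometry pass only ever reads structure (depth, normals), while the light pass only ever writes radiance, so a new local illuminant (e.g. a streetlight turned on in rain) can be added without re-optimizing scene geometry.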
April 2, 2026