Robust Neural Rendering in the Wild with Asymmetric Dual 3D Gaussian Splatting
June 4, 2025
Authors: Chengqi Li, Zhihao Shi, Yangdi Lu, Wenbo He, Xiangyu Xu
cs.AI
Abstract
3D reconstruction from in-the-wild images remains challenging due to inconsistent lighting conditions and transient distractors. Existing methods typically rely on heuristic strategies to handle the low-quality training data; these strategies often struggle to produce stable, consistent reconstructions and frequently leave visual artifacts. In this work, we propose Asymmetric Dual 3DGS, a novel framework that exploits the stochastic nature of these artifacts: they tend to vary across training runs due to minor randomness. Specifically, our method trains two 3D Gaussian Splatting (3DGS) models in parallel and enforces a consistency constraint that encourages convergence on reliable scene geometry while suppressing inconsistent artifacts.
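As a rough illustration of this loss structure, here is a minimal PyTorch sketch. It is not the authors' implementation: ToyRenderer is a hypothetical stand-in for a full 3DGS renderer (it just optimizes a fixed-view image), and lambda_cons and all hyperparameters are assumed values.

```python
# Minimal sketch of dual-model training with a mutual consistency term.
# ToyRenderer is a toy stand-in for a 3DGS model; only the loss structure
# is meant to be illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRenderer(nn.Module):
    """Stand-in for a 3DGS model: 'renders' one fixed view as a 3 x H x W image."""
    def __init__(self, h=64, w=64):
        super().__init__()
        self.canvas = nn.Parameter(torch.rand(3, h, w))

    def forward(self):
        return torch.sigmoid(self.canvas)

model_a, model_b = ToyRenderer(), ToyRenderer()
opt = torch.optim.Adam(
    list(model_a.parameters()) + list(model_b.parameters()), lr=1e-2)

gt = torch.rand(3, 64, 64)   # a training photo (may contain distractors)
lambda_cons = 0.1            # consistency weight -- an assumed value

for step in range(200):
    ra, rb = model_a(), model_b()
    # Photometric terms pull each model toward the observed image...
    loss = F.l1_loss(ra, gt) + F.l1_loss(rb, gt)
    # ...while the consistency term favors structure both models agree on;
    # stochastic artifacts differ between runs, so they are penalized here.
    loss = loss + lambda_cons * F.l1_loss(ra, rb)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The property exploited is that content unstable across independent runs (the artifacts) is exactly what the consistency term suppresses, while geometry that both models recover survives.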
To prevent the two models from collapsing into similar failure modes through confirmation bias, we introduce a divergent masking strategy that applies two complementary masks: a multi-cue adaptive mask and a self-supervised soft mask. This makes the training of the two models asymmetric and reduces their shared error modes.
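The masking could be sketched as below. The hard residual-quantile mask and the soft exponential-confidence mask are simplified stand-ins for the paper's multi-cue adaptive mask and self-supervised soft mask, whose actual cues the abstract does not specify; the quantile and beta values are assumptions.

```python
# Two complementary masks, one per model, so the models train asymmetrically.
import torch

def hard_mask(render, gt, quantile=0.8):
    # Binary mask: keep the pixels with the smallest residuals; large
    # residuals (likely transient distractors) are masked out entirely.
    err = (render - gt).abs().mean(0, keepdim=True).detach()
    return (err <= torch.quantile(err, quantile)).float()

def soft_mask(render, gt, beta=5.0):
    # Continuous confidence in (0, 1], decaying with the residual.
    err = (render - gt).abs().mean(0, keepdim=True).detach()
    return torch.exp(-beta * err)

def masked_l1(render, gt, mask):
    return ((render - gt).abs() * mask).mean()

# In the loop above, the photometric terms would become asymmetric:
#   loss = masked_l1(ra, gt, hard_mask(ra, gt)) \
#        + masked_l1(rb, gt, soft_mask(rb, gt)) \
#        + lambda_cons * F.l1_loss(ra, rb)
```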
In addition, to improve training efficiency, we introduce a lightweight variant, the Dynamic EMA Proxy, which replaces one of the two models with a dynamically updated Exponential Moving Average (EMA) proxy and employs an alternating masking strategy to preserve divergence.
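Continuing the toy setup from the sketches above (ToyRenderer, the two masks, gt, and lambda_cons), the EMA variant might look like the following; the decay value and the per-step mask alternation schedule are assumptions.

```python
# Only one model is optimized; its consistency target is an EMA copy,
# and the two masks alternate across iterations to keep training divergent.
import copy
import torch
import torch.nn.functional as F

model = ToyRenderer()            # from the first sketch
ema = copy.deepcopy(model)
for p in ema.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(model.parameters(), lr=1e-2)
decay = 0.99                     # EMA decay rate -- an assumed value

for step in range(200):
    r = model()
    with torch.no_grad():
        r_ema = ema()
    # Alternate the two masks across iterations instead of dedicating
    # one mask per model.
    mask_fn = hard_mask if step % 2 == 0 else soft_mask
    loss = masked_l1(r, gt, mask_fn(r, gt)) + lambda_cons * F.l1_loss(r, r_ema)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Dynamically update the EMA proxy from the online model.
    with torch.no_grad():
        for p_ema, p in zip(ema.parameters(), model.parameters()):
            p_ema.mul_(decay).add_(p, alpha=1 - decay)
```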
Extensive experiments on challenging real-world datasets demonstrate that our method consistently outperforms existing approaches while achieving high efficiency. Code and trained models will be released.