WildRayZer: Self-supervised Large View Synthesis in Dynamic Environments
January 15, 2026
Authors: Xuweiyi Chen, Wentao Zhou, Zezhou Cheng
cs.AI
Abstract
We present WildRayZer, a self-supervised framework for novel view synthesis (NVS) in dynamic environments where both the camera and objects move. Dynamic content breaks the multi-view consistency that static NVS models rely on, leading to ghosting, hallucinated geometry, and unstable pose estimation. WildRayZer addresses this by performing an analysis-by-synthesis test: a camera-only static renderer explains rigid structure, and its residuals reveal transient regions. From these residuals, we construct pseudo motion masks, distill a motion estimator, and use it to mask input tokens and gate loss gradients so supervision focuses on cross-view background completion. To enable large-scale training and evaluation, we curate Dynamic RealEstate10K (D-RE10K), a real-world dataset of 15K casually captured dynamic sequences, and D-RE10K-iPhone, a paired transient and clean benchmark for sparse-view transient-aware NVS. Experiments show that WildRayZer consistently outperforms optimization-based and feed-forward baselines in both transient-region removal and full-frame NVS quality with a single feed-forward pass.
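The core idea above — score each pixel by how poorly a camera-only static render explains it, threshold the residual into a pseudo motion mask, and gate the reconstruction loss so only static background pixels supervise the model — can be illustrated with a minimal sketch. This is not the paper's implementation; the function names, the simple per-pixel threshold `tau`, and the plain L2 photometric loss are illustrative assumptions.

```python
import numpy as np

def pseudo_motion_mask(rendered, observed, tau=0.1):
    """Analysis-by-synthesis test (illustrative): pixels the static,
    camera-only render cannot explain are flagged as transient."""
    # Per-pixel residual, averaged over the RGB channels.
    residual = np.abs(rendered - observed).mean(axis=-1)
    # Simple threshold stands in for the paper's mask construction.
    return residual > tau  # True where the pixel is likely transient

def gated_photometric_loss(pred, target, transient_mask):
    """Gate loss gradients (illustrative): supervise only static pixels,
    focusing the model on cross-view background completion."""
    static = ~transient_mask
    if static.sum() == 0:
        return 0.0  # no static pixels to supervise
    per_pixel = ((pred - target) ** 2).mean(axis=-1)  # L2 over RGB
    return float(per_pixel[static].mean())

# Toy example: a static scene with one moving pixel at (0, 0).
rendered = np.zeros((4, 4, 3))
observed = rendered.copy()
observed[0, 0] = 1.0  # transient object appears here
mask = pseudo_motion_mask(rendered, observed)
loss = gated_photometric_loss(rendered, observed, mask)
```

In the toy example the mask fires only at the transient pixel, and the gated loss over the remaining static pixels is zero, since the static render explains them perfectly.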