ReLi3D: Relightable Multi-view 3D Reconstruction with Disentangled Illumination
March 20, 2026
作者: Jan-Niklas Dihlmann, Mark Boss, Simon Donne, Andreas Engelhardt, Hendrik P. A. Lensch, Varun Jampani
cs.AI
Abstract
Reconstructing 3D assets from images has long required separate pipelines for geometry reconstruction, material estimation, and illumination recovery, each with distinct limitations and computational overhead. We present ReLi3D, the first unified end-to-end pipeline that simultaneously reconstructs complete 3D geometry, spatially-varying physically-based materials, and environment illumination from sparse multi-view images in under one second. Our key insight is that multi-view constraints can dramatically improve material and illumination disentanglement, a problem that remains fundamentally ill-posed for single-image methods. Key to our approach is the fusion of the multi-view input via a transformer cross-conditioning architecture, followed by a novel unified two-path prediction strategy. The first path predicts the object's structure and appearance, while the second path predicts the environment illumination from the image background or object reflections. This, combined with a differentiable Monte Carlo multiple importance sampling renderer, creates an optimal illumination-disentanglement training pipeline. In addition, with our mixed-domain training protocol, which combines synthetic PBR datasets with real-world RGB captures, we establish generalizable results in geometry, material accuracy, and illumination quality. By unifying previously separate reconstruction tasks into a single feed-forward pass, we enable near-instantaneous generation of complete, relightable 3D assets. Project Page: https://reli3d.jdihlmann.com/
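The abstract's renderer relies on Monte Carlo multiple importance sampling (MIS), a standard variance-reduction technique that combines several sampling strategies by weighting each sample with Veach's balance heuristic. The sketch below is not the paper's differentiable renderer; it is a minimal, self-contained illustration of MIS on a toy 1D integrand, with hypothetical names (`balance_heuristic`, `mis_estimate`) chosen for clarity.

```python
import random

def balance_heuristic(n_i, pdf_i, denom):
    """Veach's balance heuristic: weight for a sample drawn from strategy i,
    where denom = sum over all strategies j of n_j * pdf_j(x)."""
    return (n_i * pdf_i) / denom

def mis_estimate(f, strategies, rng):
    """MIS estimator. Each strategy is a (sampler, pdf, n_samples) triple;
    contributions are weighted so the combined estimator stays unbiased."""
    estimate = 0.0
    for sample_i, pdf_i, n_i in strategies:
        acc = 0.0
        for _ in range(n_i):
            x = sample_i(rng)
            denom = sum(n_j * pdf_j(x) for (_, pdf_j, n_j) in strategies)
            w = balance_heuristic(n_i, pdf_i(x), denom)
            acc += w * f(x) / pdf_i(x)
        estimate += acc / n_i
    return estimate

# Toy integrand f(x) = 2x on [0, 1]; the true integral is 1.
f = lambda x: 2.0 * x
# Strategy 1: uniform sampling, pdf(x) = 1.
uniform = (lambda rng: rng.random(), lambda x: 1.0, 2000)
# Strategy 2: importance sampling with pdf(x) = 2x (inverse CDF: x = sqrt(u)).
linear = (lambda rng: rng.random() ** 0.5, lambda x: 2.0 * x, 2000)

rng = random.Random(0)
est = mis_estimate(f, [uniform, linear], rng)
```

In a physically-based renderer the two strategies would typically be BSDF sampling and environment-light sampling; making such an estimator differentiable with respect to material and lighting parameters is what enables the end-to-end training described above.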