LightSwitch: Multi-view Relighting with Material-guided Diffusion
August 8, 2025
Authors: Yehonathan Litman, Fernando De la Torre, Shubham Tulsiani
cs.AI
Abstract
Recent approaches for 3D relighting have shown promise in integrating 2D image relighting generative priors to alter the appearance of a 3D representation while preserving the underlying structure. Nevertheless, generative priors for 2D relighting that relight directly from an input image neither exploit the subject's inferable intrinsic properties nor scale to multi-view data, leading to subpar relighting. In this paper, we propose LightSwitch, a novel finetuned material-relighting diffusion framework that efficiently relights an arbitrary number of input images to a target lighting condition while incorporating cues from inferred intrinsic properties. By combining multi-view and material information cues with a scalable denoising scheme, our method consistently and efficiently relights dense multi-view data of objects with diverse material compositions. We show that our 2D relighting prediction quality exceeds that of previous state-of-the-art priors that relight directly from images. We further demonstrate that LightSwitch matches or outperforms state-of-the-art diffusion inverse rendering methods when relighting synthetic and real objects, in as little as 2 minutes.
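The abstract does not fix an implementation, but the core idea it describes, conditioning a diffusion denoiser on inferred material buffers and a target-lighting encoding while denoising all views jointly, can be sketched. Everything below (RelightUNet, relight_views, the channel layout, the noise schedule) is an illustrative assumption, not the paper's actual architecture:

```python
# Hedged sketch of material-guided multi-view relighting diffusion.
# Module names, channel layout, and schedule are assumptions for illustration.
import torch
import torch.nn as nn

class RelightUNet(nn.Module):
    """Toy stand-in denoiser: predicts noise for a batch of views from the
    noisy image plus per-view conditioning (source view, intrinsics, lighting)."""
    def __init__(self, in_ch: int, cond_ch: int, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + cond_ch, width, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(width, in_ch, 3, padding=1),
        )

    def forward(self, x_t: torch.Tensor, cond: torch.Tensor, t: int) -> torch.Tensor:
        # A real denoiser would also embed the timestep t and attend across
        # views; this toy version just concatenates the conditioning channels.
        return self.net(torch.cat([x_t, cond], dim=1))

@torch.no_grad()
def relight_views(images, materials, light, denoiser, steps=50):
    """Relight all V input views in one batched, deterministic DDIM pass.

    images:    (V, 3, H, W) source renderings under the original lighting
    materials: (V, M, H, W) inferred intrinsics, e.g. albedo/roughness/normals
    light:     (V, L, H, W) per-view encoding of the target lighting
    """
    V, _, H, W = images.shape
    cond = torch.cat([images, materials, light], dim=1)

    betas = torch.linspace(1e-4, 0.02, steps)          # linear noise schedule
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)

    x = torch.randn(V, 3, H, W)                        # start every view from noise
    for t in reversed(range(steps)):
        eps = denoiser(x, cond, t)                     # noise prediction, all views jointly
        a_bar = alphas_bar[t]
        a_prev = alphas_bar[t - 1] if t > 0 else torch.tensor(1.0)
        x0 = (x - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()  # predicted clean image
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # DDIM (eta = 0) update
    return x0.clamp(-1.0, 1.0)

# Example: relight 8 views at 64x64 with 5 material and 3 lighting channels.
denoiser = RelightUNet(in_ch=3, cond_ch=3 + 5 + 3)
relit = relight_views(torch.randn(8, 3, 64, 64), torch.randn(8, 5, 64, 64),
                      torch.randn(8, 3, 64, 64), denoiser)
print(relit.shape)  # torch.Size([8, 3, 64, 64])
```

Batching the V views through a single denoiser call is the simplest way to make such a scheme handle an arbitrary number of input images; the paper's scalable denoising scheme and any cross-view attention would replace the toy convolutional stand-in here.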