UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields
June 27, 2025
Authors: Fabian Perez, Sara Rojas, Carlos Hinojosa, Hoover Rueda-Chacón, Bernard Ghanem
cs.AI
Abstract
Neural Radiance Field (NeRF)-based segmentation methods focus on object
semantics and rely solely on RGB data, lacking intrinsic material properties.
This limitation restricts accurate material perception, which is crucial for
robotics, augmented reality, simulation, and other applications. We introduce
UnMix-NeRF, a framework that integrates spectral unmixing into NeRF, enabling
joint hyperspectral novel view synthesis and unsupervised material
segmentation. Our method models spectral reflectance via diffuse and specular
components, where a learned dictionary of global endmembers represents pure
material signatures, and per-point abundances capture their distribution. For
material segmentation, we use spectral signature predictions along learned
endmembers, allowing unsupervised material clustering. Additionally, UnMix-NeRF
enables scene editing by modifying learned endmember dictionaries for flexible
material-based appearance manipulation. Extensive experiments validate our
approach, demonstrating spectral reconstruction and material segmentation
superior to existing methods. Project page:
https://www.factral.co/UnMix-NeRF.
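
The sketch below is a minimal illustration of the linear spectral unmixing idea described in the abstract: a global endmember dictionary is mixed with per-point abundances to form a diffuse spectral reflectance, a specular term is added, and the abundances are clustered for unsupervised material segmentation. All names, shapes, the random data, the wavelength-independent specular placeholder, and the k-means step are assumptions for illustration only; in the paper, abundances and the specular component are predicted by the NeRF network.

```python
# Minimal sketch of linear spectral unmixing, assuming hypothetical shapes and data.
import numpy as np
from sklearn.cluster import KMeans

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

N, M, B = 1024, 6, 128           # sample points, endmembers (materials), spectral bands

# Global endmember dictionary: one pure spectral signature per material.
endmembers = np.random.rand(M, B)

# Per-point abundance logits (stand-in for a NeRF MLP output), normalized to sum to one.
abundances = softmax(np.random.randn(N, M), axis=-1)

# Diffuse spectral reflectance: abundance-weighted combination of endmembers.
diffuse = abundances @ endmembers                    # (N, B)

# Placeholder wavelength-independent specular term (an assumption for this sketch).
specular = np.random.rand(N, 1)
reflectance = diffuse + specular                     # per-point spectral signature

# Unsupervised material segmentation: cluster points by their abundance vectors
# (k-means here is purely illustrative, not the paper's clustering procedure).
labels = KMeans(n_clusters=M, n_init=10).fit_predict(abundances)
print(reflectance.shape, labels.shape)               # (1024, 128) (1024,)
```

Under this toy model, editing a row of `endmembers` changes the appearance of every point whose abundance weights favor that material, which conveys the intuition behind the material-based scene editing mentioned in the abstract.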