Seal-3D: Interactive Pixel-Level Editing for Neural Radiance Fields
July 27, 2023
Authors: Xiangyu Wang, Jingsen Zhu, Qi Ye, Yuchi Huo, Yunlong Ran, Zhihua Zhong, Jiming Chen
cs.AI
Abstract
With the popularity of implicit neural representations, or neural radiance
fields (NeRF), there is a pressing need for editing methods to interact with
the implicit 3D models for tasks like post-processing reconstructed scenes and
3D content creation. While previous works have explored NeRF editing from
various perspectives, they are limited in editing flexibility, quality, and
speed, failing to offer a direct editing response and instant preview. The key
challenge is to conceive a locally editable neural representation that can
directly reflect the editing instructions and update instantly. To bridge the
gap, we propose a new interactive editing method and system for implicit
representations, called Seal-3D, which allows users to edit NeRF models in a
pixel-level and free manner across a wide range of NeRF-like backbones and
preview the editing effects instantly. To achieve these effects, we address the
challenges with a proxy function that maps editing instructions to the original
space of NeRF models and a teacher-student training strategy combining local
pretraining and global finetuning. A NeRF editing system is built to
showcase various editing types. Our system can achieve compelling editing
effects with an interactive speed of about 1 second.
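To make the abstract's two key ideas concrete, below is a minimal, illustrative sketch in PyTorch of (a) a proxy function that maps query points in the edited space back to the original space of a frozen teacher NeRF, and (b) a teacher-student distillation loop with local pretraining restricted to the edit region followed by global finetuning. All names and parameters here (TinyNeRF, proxy_map, in_edit_region, the translation edit, the box bounds) are hypothetical placeholders chosen for illustration, not the authors' implementation.

```python
# Hedged sketch of the teacher-student editing idea described in the abstract.
import copy
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy NeRF-like MLP: maps a 3D point to (r, g, b, sigma)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, x):
        return self.net(x)

def proxy_map(points, translation=torch.tensor([0.1, 0.0, 0.0])):
    # Proxy function: maps points in the edited (target) space back to the
    # original (source) space of the teacher. A simple translation edit is
    # used here as a stand-in for an arbitrary user instruction.
    return points - translation

def in_edit_region(points, lo=torch.tensor([-0.2] * 3), hi=torch.tensor([0.2] * 3)):
    # Axis-aligned bounding box of the user edit (assumed for illustration).
    return ((points >= lo) & (points <= hi)).all(dim=-1)

teacher = TinyNeRF()              # frozen, pretrained scene model
student = copy.deepcopy(teacher)  # editable copy that will be updated
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distill_step(points, local_only):
    # The teacher, queried at proxy-mapped points inside the edit region,
    # supplies the "edited" supervision target for the student.
    with torch.no_grad():
        mask = in_edit_region(points)
        target = teacher(torch.where(mask[:, None], proxy_map(points), points))
    if local_only:  # local pretraining: supervise only the edit region
        points, target = points[mask], target[mask]
        if points.numel() == 0:
            return
    loss = nn.functional.mse_loss(student(points), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 1: local pretraining for an instant preview of the edit.
for _ in range(100):
    distill_step(torch.rand(4096, 3) * 2 - 1, local_only=True)
# Stage 2: global finetuning to keep the rest of the scene consistent.
for _ in range(100):
    distill_step(torch.rand(4096, 3) * 2 - 1, local_only=False)
```

In this sketch, the two-stage loop mirrors the abstract's local pretraining / global finetuning split: the first stage updates only the edit region so a preview can appear quickly, and the second stage distills over the whole domain to avoid artifacts outside the edited area.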