
Customize your NeRF: Adaptive Source Driven 3D Scene Editing via Local-Global Iterative Training

December 4, 2023
作者: Runze He, Shaofei Huang, Xuecheng Nie, Tianrui Hui, Luoqi Liu, Jiao Dai, Jizhong Han, Guanbin Li, Si Liu
cs.AI

Abstract

In this paper, we target the adaptive source-driven 3D scene editing task by proposing CustomNeRF, a model that unifies a text description or a reference image as the editing prompt. Obtaining editing results that conform to the editing prompt is nontrivial because of two significant challenges: accurately editing only the foreground regions, and maintaining multi-view consistency given a single-view reference image. To tackle the first challenge, we propose a Local-Global Iterative Editing (LGIE) training scheme that alternates between foreground-region editing and full-image editing, enabling foreground-only manipulation while preserving the background. For the second challenge, we design a class-guided regularization that exploits class priors within the generation model to alleviate inconsistency among different views in image-driven editing. Extensive experiments show that CustomNeRF produces precise editing results on a variety of real scenes in both text- and image-driven settings.
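The core idea of the LGIE scheme is an alternation between a local step, where a prompt-conditioned editing loss is applied only to foreground rays, and a global step, where a full-image loss updates the scene while the background is anchored to its original appearance. The sketch below is a minimal toy illustration of that alternation, not the paper's implementation: `TinyNeRF`, `edit_loss`, the ray/mask construction, and the background-preservation weight are all hypothetical stand-ins (the actual method uses volume rendering and diffusion-based score distillation, with the class-guided regularization applied in the image-driven branch).

```python
# Illustrative sketch of Local-Global Iterative Editing (LGIE).
# All components here (TinyNeRF, edit_loss, the masks and data) are
# hypothetical stand-ins for the real CustomNeRF pipeline.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy stand-in for a NeRF: maps 3D ray coordinates to RGB."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3), nn.Sigmoid()
        )

    def forward(self, rays):            # rays: (N, 3)
        return self.mlp(rays)           # -> (N, 3) RGB

def edit_loss(rgb, prompt_embedding):
    # Placeholder for a prompt-conditioned editing loss
    # (e.g. a score-distillation loss from a diffusion model).
    return (rgb - prompt_embedding).pow(2).mean()

nerf = TinyNeRF()
opt = torch.optim.Adam(nerf.parameters(), lr=1e-3)
prompt = torch.rand(3)                  # mock editing-prompt embedding
rays = torch.rand(1024, 3)              # mock rays for one view
fg_mask = rays[:, 0] > 0.5              # mock foreground mask
bg_ref = torch.rand(int((~fg_mask).sum()), 3)  # cached background colors

for step in range(200):
    rgb = nerf(rays)
    if step % 2 == 0:
        # Local step: editing loss restricted to foreground rays,
        # so only the target region is pushed toward the prompt.
        loss = edit_loss(rgb[fg_mask], prompt)
    else:
        # Global step: full-image editing loss plus a background
        # preservation term pinning unedited rays to the original
        # scene (standing in for background anchoring; the paper's
        # class-guided regularization similarly constrains the
        # image-driven branch with class priors).
        loss = edit_loss(rgb, prompt) \
             + 10.0 * (rgb[~fg_mask] - bg_ref).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```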