

Blended-NeRF: Zero-Shot Object Generation and Blending in Existing Neural Radiance Fields

June 22, 2023
作者: Ori Gordon, Omri Avrahami, Dani Lischinski
cs.AI

Abstract

Editing a local region or a specific object in a 3D scene represented by a NeRF is challenging, mainly due to the implicit nature of the scene representation. Consistently blending a new realistic object into the scene adds an additional level of difficulty. We present Blended-NeRF, a robust and flexible framework for editing a specific region of interest in an existing NeRF scene, based on text prompts or image patches, along with a 3D ROI box. Our method leverages a pretrained language-image model to steer the synthesis towards a user-provided text prompt or image patch, along with a 3D MLP model initialized on an existing NeRF scene to generate the object and blend it into a specified region in the original scene. We allow local editing by localizing a 3D ROI box in the input scene, and seamlessly blend the content synthesized inside the ROI with the existing scene using a novel volumetric blending technique. To obtain natural looking and view-consistent results, we leverage existing and new geometric priors and 3D augmentations for improving the visual fidelity of the final result. We test our framework both qualitatively and quantitatively on a variety of real 3D scenes and text prompts, demonstrating realistic multi-view consistent results with much flexibility and diversity compared to the baselines. Finally, we show the applicability of our framework for several 3D editing applications, including adding new objects to a scene, removing/replacing/altering existing objects, and texture conversion.
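The abstract's "novel volumetric blending technique" composites the radiance field synthesized inside the 3D ROI box with the original scene's field. The sketch below illustrates one plausible form of such blending for per-sample densities and colors: outside the ROI the original field is kept unchanged, while inside it densities are summed and colors are density-weighted. The function names, the NumPy formulation, and the hard ROI mask are illustrative assumptions, not the paper's exact (distance-smoothed) formulation.

```python
import numpy as np

def in_roi(points, roi_min, roi_max):
    """Boolean mask of sample points inside the axis-aligned ROI box."""
    return np.all((points >= roi_min) & (points <= roi_max), axis=-1)

def blend_fields(points, sigma_orig, rgb_orig, sigma_edit, rgb_edit,
                 roi_min, roi_max, eps=1e-8):
    """Composite an edited radiance field into the original scene.

    Outside the ROI the original scene is returned unchanged; inside it,
    densities are summed and colors are weighted by relative density,
    a common way to merge two NeRF fields along a ray.
    """
    mask = in_roi(points, roi_min, roi_max)                  # (N,)
    sigma = np.where(mask, sigma_orig + sigma_edit, sigma_orig)
    # Weight of the edited field's color grows with its relative density.
    w = (sigma_edit / (sigma_orig + sigma_edit + eps))[..., None]
    rgb_in = (1.0 - w) * rgb_orig + w * rgb_edit
    rgb = np.where(mask[..., None], rgb_in, rgb_orig)
    return sigma, rgb
```

The blended `sigma` and `rgb` would then feed the standard NeRF volume-rendering integral unchanged, which is what keeps the edit seamless across views: rays that never enter the ROI render exactly as in the original scene.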