ReplaceAnything3D:Text-Guided 3D Scene Editing with Compositional Neural Radiance Fields
January 31, 2024
Authors: Edward Bartrum, Thu Nguyen-Phuoc, Chris Xie, Zhengqin Li, Numair Khan, Armen Avetisyan, Douglas Lanman, Lei Xiao
cs.AI
Abstract
We introduce the ReplaceAnything3D model (RAM3D), a novel text-guided 3D scene
editing method that enables the replacement of specific objects within a scene.
Given multi-view images of a scene, a text prompt describing the object to
replace, and a text prompt describing the new object, our Erase-and-Replace
approach can effectively swap objects in the scene with newly generated content
while maintaining 3D consistency across multiple viewpoints. We demonstrate the
versatility of ReplaceAnything3D by applying it to various realistic 3D scenes,
showcasing results of modified foreground objects that are well-integrated with
the rest of the scene without affecting its overall integrity.
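The abstract describes a pipeline that takes posed multi-view images plus two text prompts (what to erase, what to generate) and returns 3D-consistent edited views. The sketch below shows only that input/output shape as a minimal, hypothetical interface; all names, types, and the two-stage structure are illustrative assumptions, not the authors' actual API or implementation.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical interface sketch of the Erase-and-Replace pipeline from the
# abstract. Every identifier here is illustrative; RAM3D's real implementation
# operates on compositional neural radiance fields, which are stubbed out.

@dataclass
class EditRequest:
    multi_view_images: List[str]  # paths to posed multi-view images of the scene
    erase_prompt: str             # text describing the object to replace
    replace_prompt: str           # text describing the new object


def erase_and_replace(req: EditRequest) -> List[str]:
    """Two-stage edit: (1) erase the object named by `erase_prompt` from all
    views, (2) generate the object named by `replace_prompt` and composite it
    back so the result stays 3D-consistent across viewpoints (stubbed)."""
    erased = [f"erased({img} - '{req.erase_prompt}')" for img in req.multi_view_images]
    return [f"composited({view} + '{req.replace_prompt}')" for view in erased]


if __name__ == "__main__":
    req = EditRequest(
        multi_view_images=["view_00.png", "view_01.png"],
        erase_prompt="a wooden chair",
        replace_prompt="a leather armchair",
    )
    edited = erase_and_replace(req)
    # One edited image per input viewpoint
    print(len(edited), edited[0])
```

The key property the method claims, and that any faithful implementation must preserve, is that the replacement is applied jointly across all viewpoints rather than per-image, so the generated object remains geometrically consistent in 3D.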