SHAP-EDITOR: Instruction-guided Latent 3D Editing in Seconds

December 14, 2023
Authors: Minghao Chen, Junyu Xie, Iro Laina, Andrea Vedaldi
cs.AI

Abstract

We propose a novel feed-forward 3D editing framework called Shap-Editor. Prior research on editing 3D objects primarily concentrated on editing individual objects by leveraging off-the-shelf 2D image editing networks. This is achieved via a process called distillation, which transfers knowledge from the 2D network to 3D assets. Distillation necessitates at least tens of minutes per asset to attain satisfactory editing results, and is thus not very practical. In contrast, we ask whether 3D editing can be carried out directly by a feed-forward network, eschewing test-time optimisation. In particular, we hypothesise that editing can be greatly simplified by first encoding 3D objects in a suitable latent space. We validate this hypothesis by building upon the latent space of Shap-E. We demonstrate that direct 3D editing in this space is possible and efficient by building a feed-forward editor network that only requires approximately one second per edit. Our experiments show that Shap-Editor generalises well to both in-distribution and out-of-distribution 3D assets with different prompts, exhibiting comparable performance with methods that carry out test-time optimisation for each edited instance.
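The abstract describes the core idea (a feed-forward network that edits an object directly in Shap-E's latent space, conditioned on a text instruction) but includes no code. The following is a minimal sketch of what such a latent editor might look like, not the authors' architecture: the class name `LatentEditor`, the latent and text-embedding dimensions, the residual MLP design, and the placeholder inputs are all illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): a feed-forward editor
# that maps a Shap-E object latent plus an instruction embedding to an
# edited latent in a single forward pass.
import torch
import torch.nn as nn

class LatentEditor(nn.Module):
    def __init__(self, latent_dim: int = 1024, text_dim: int = 768, hidden: int = 2048):
        super().__init__()
        # Condition the edit on the instruction by concatenating the object
        # latent with a text embedding (e.g. from a frozen text encoder).
        self.net = nn.Sequential(
            nn.Linear(latent_dim + text_dim, hidden),
            nn.SiLU(),
            nn.Linear(hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, latent: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        # Predict a residual so the editor defaults to preserving the input object.
        delta = self.net(torch.cat([latent, text_emb], dim=-1))
        return latent + delta

# Usage: one forward pass replaces per-asset test-time optimisation, which is
# why an edit can take on the order of a second rather than tens of minutes.
editor = LatentEditor()
latent = torch.randn(1, 1024)    # Shap-E latent of the source object (placeholder)
text_emb = torch.randn(1, 768)   # embedding of an instruction prompt (placeholder)
edited_latent = editor(latent, text_emb)  # decode with Shap-E to obtain the edited asset
```

In this framing, the expensive 2D-to-3D distillation happens once, at training time, to supervise the editor network; at test time the edited latent is simply decoded by Shap-E.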