NANO3D: A Training-Free Approach for Efficient 3D Editing Without Masks
October 16, 2025
Authors: Junliang Ye, Shenghao Xie, Ruowen Zhao, Zhengyi Wang, Hongyu Yan, Wenqiang Zu, Lei Ma, Jun Zhu
cs.AI
Abstract
3D object editing is essential for interactive content creation in gaming,
animation, and robotics, yet current approaches remain inefficient,
inconsistent, and often fail to preserve unedited regions. Most methods rely on
editing multi-view renderings followed by reconstruction, which introduces
artifacts and limits practicality. To address these challenges, we propose
Nano3D, a training-free framework for precise and coherent 3D object editing
without masks. Nano3D integrates FlowEdit into TRELLIS to perform localized
edits guided by front-view renderings, and further introduces region-aware
merging strategies, Voxel/Slat-Merge, which adaptively preserve structural
fidelity by ensuring consistency between edited and unedited areas. Experiments
demonstrate that Nano3D achieves superior 3D consistency and visual quality
compared with existing methods. Based on this framework, we construct the first
large-scale 3D editing dataset, Nano3D-Edit-100k, which contains over 100,000
high-quality 3D editing pairs. This work addresses long-standing challenges in
both algorithm design and data availability, significantly improving the
generality and reliability of 3D editing, and laying the groundwork for the
development of feed-forward 3D editing models.
Project Page: https://jamesyjl.github.io/Nano3D
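The region-aware merging idea can be illustrated with a small sketch: voxels whose features barely change between the original and edited grids are treated as unedited and keep their original features, while strongly changed voxels take the edited ones. This is a hypothetical NumPy illustration of the general principle, not the paper's actual Voxel/Slat-Merge implementation; the function name, feature layout, and `threshold` parameter are assumptions.

```python
import numpy as np

def region_aware_merge(src_feats, edit_feats, threshold=0.5):
    """Illustrative region-aware merge of two voxel feature grids.

    Hypothetical sketch (not the paper's Voxel/Slat-Merge): voxels with a
    small per-voxel feature change keep the source features, so unedited
    regions stay identical; strongly changed voxels take the edited ones.
    """
    # Per-voxel L2 magnitude of the feature change.
    diff = np.linalg.norm(edit_feats - src_feats, axis=-1)
    edited = diff > threshold                      # boolean "edited region" mask
    merged = np.where(edited[..., None], edit_feats, src_feats)
    return merged, edited

# Toy example: a 2x2x2 voxel grid with 3-dim features, one voxel edited.
src = np.zeros((2, 2, 2, 3))
edit = src.copy()
edit[0, 0, 0] = 1.0                                # strong edit at one voxel
merged, mask = region_aware_merge(src, edit)
assert mask.sum() == 1                             # only the edited voxel flagged
assert np.allclose(merged[0, 0, 0], 1.0)           # edited voxel takes new features
assert np.allclose(merged[1, 1, 1], 0.0)           # unedited voxels preserved
```

A hard threshold is the simplest choice here; a softer, adaptive blend near region boundaries would better match the paper's stated goal of consistency between edited and unedited areas.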