OmniPaint: Mastering Object-Oriented Editing via Disentangled Insertion-Removal Inpainting
March 11, 2025
Authors: Yongsheng Yu, Ziyun Zeng, Haitian Zheng, Jiebo Luo
cs.AI
Abstract
Diffusion-based generative models have revolutionized object-oriented image
editing, yet their deployment in realistic object removal and insertion remains
hampered by challenges such as the intricate interplay of physical effects and
insufficient paired training data. In this work, we introduce OmniPaint, a
unified framework that re-conceptualizes object removal and insertion as
interdependent processes rather than isolated tasks. Leveraging a pre-trained
diffusion prior along with a progressive training pipeline comprising initial
paired sample optimization and subsequent large-scale unpaired refinement via
CycleFlow, OmniPaint achieves precise foreground elimination and seamless
object insertion while faithfully preserving scene geometry and intrinsic
properties. Furthermore, our novel CFD metric offers a robust, reference-free
evaluation of context consistency and object hallucination, establishing a new
benchmark for high-fidelity image editing. Project page:
https://yeates.github.io/OmniPaint-Page/
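The abstract describes removal and insertion as interdependent processes refined on unpaired data via CycleFlow. As a toy illustration of the underlying cycle-consistency idea only — the `remove_object` and `insert_object` functions below are hypothetical stand-ins operating on arrays, not the paper's diffusion models — the remove-then-reinsert reconstruction objective could be sketched as:

```python
import numpy as np

def remove_object(image, mask):
    """Toy 'removal': blank out the masked region (stand-in for the removal model)."""
    out = image.copy()
    out[mask] = 0.0
    return out

def insert_object(background, reference, mask):
    """Toy 'insertion': paste the reference object's pixels into the masked region
    (stand-in for the insertion model)."""
    out = background.copy()
    out[mask] = reference[mask]
    return out

def cycle_consistency_loss(image, mask):
    """Remove the object, re-insert it, and measure reconstruction error.
    A trained model would minimize this over large-scale unpaired images."""
    background = remove_object(image, mask)
    reconstruction = insert_object(background, image, mask)
    return float(np.mean((reconstruction - image) ** 2))
```

With these exact stand-in operators the cycle reconstructs the input perfectly, so the loss is zero; with learned removal and insertion networks the same objective supplies a training signal without paired before/after samples.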