Geometry-Editable and Appearance-Preserving Object Composition
May 27, 2025
Authors: Jianman Lin, Haojie Li, Chunmei Qing, Zhijing Yang, Liang Lin, Tianshui Chen
cs.AI
Abstract
General object composition (GOC) aims to seamlessly integrate a target object
into a background scene with desired geometric properties, while simultaneously
preserving its fine-grained appearance details. Recent approaches derive
semantic embeddings and integrate them into advanced diffusion models to enable
geometry-editable generation. However, these highly compact embeddings encode
only high-level semantic cues and inevitably discard fine-grained appearance
details. We introduce a Disentangled Geometry-editable and
Appearance-preserving Diffusion (DGAD) model that first leverages semantic
embeddings to implicitly capture the desired geometric transformations and then
employs a cross-attention retrieval mechanism to align fine-grained appearance
features with the geometry-edited representation, facilitating both precise
geometry editing and faithful appearance preservation in object composition.
Specifically, DGAD builds on CLIP/DINO-derived encoders and reference networks to
extract semantic embeddings and appearance-preserving representations, which
are then seamlessly integrated into the encoding and decoding pipelines in a
disentangled manner. We first integrate the semantic embeddings into
pre-trained diffusion models that exhibit strong spatial reasoning capabilities
to implicitly capture object geometry, thereby facilitating flexible object
manipulation and ensuring effective editability. Then, we design a dense
cross-attention mechanism that leverages the implicitly learned object geometry
to retrieve and spatially align appearance features with their corresponding
regions, ensuring faithful appearance consistency. Extensive experiments on
public benchmarks demonstrate the effectiveness of the proposed DGAD framework.
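
The core retrieval step described above can be illustrated with a minimal sketch: queries are drawn from the geometry-edited diffusion features, while keys and values come from dense appearance features of the reference object, so each spatial location retrieves the appearance detail that matches its implicitly learned geometry. Note that this is an illustrative approximation, not the authors' released implementation; the class name `DenseAppearanceCrossAttention`, the feature dimensions, and the residual injection at the end are hypothetical choices for exposition.

```python
# Minimal sketch (assumed design, not the official DGAD code) of a dense
# cross-attention that retrieves reference appearance features for every
# spatial location of the geometry-edited latent.
import torch
import torch.nn as nn


class DenseAppearanceCrossAttention(nn.Module):
    """Queries: geometry-edited diffusion features (B, C, H, W).
    Keys/values: dense appearance features from a reference network,
    e.g. a DINO feature map (B, C', H', W'). Dimensions are hypothetical."""

    def __init__(self, latent_dim: int, appearance_dim: int, attn_dim: int = 256):
        super().__init__()
        self.to_q = nn.Linear(latent_dim, attn_dim, bias=False)
        self.to_k = nn.Linear(appearance_dim, attn_dim, bias=False)
        self.to_v = nn.Linear(appearance_dim, latent_dim, bias=False)
        self.scale = attn_dim ** -0.5

    def forward(self, latent: torch.Tensor, appearance: torch.Tensor) -> torch.Tensor:
        b, c, h, w = latent.shape
        # Flatten spatial grids into token sequences.
        q = self.to_q(latent.flatten(2).transpose(1, 2))      # (B, HW, d)
        ref = appearance.flatten(2).transpose(1, 2)            # (B, H'W', C')
        k = self.to_k(ref)                                     # (B, H'W', d)
        v = self.to_v(ref)                                     # (B, H'W', C)
        # Each latent location attends over all reference locations and
        # retrieves the appearance detail matching its geometry.
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        retrieved = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        # Residual injection: keep the geometry-edited structure, add the
        # spatially aligned fine-grained appearance.
        return latent + retrieved


# Toy usage with made-up shapes: a 64x64 latent attending to a 32x32
# reference appearance map.
fused = DenseAppearanceCrossAttention(latent_dim=320, appearance_dim=768)(
    torch.randn(1, 320, 64, 64), torch.randn(1, 768, 32, 32)
)
```

In this reading, the compact CLIP/DINO semantic embedding steers geometry through the diffusion model's standard conditioning path, while the dense retrieval above supplies the fine-grained appearance that the compact embedding cannot carry; how DGAD combines the two in practice is specified in the paper itself.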