

LOCATEdit: Graph Laplacian Optimized Cross Attention for Localized Text-Guided Image Editing

March 27, 2025
Authors: Achint Soni, Meet Soni, Sirisha Rambhatla
cs.AI

Abstract

Text-guided image editing aims to modify specific regions of an image according to natural language instructions while maintaining the overall structure and background fidelity. Existing methods use masks derived from the cross-attention maps of diffusion models to identify the target regions for modification. However, since cross-attention mechanisms focus on semantic relevance, they struggle to maintain image integrity. As a result, these methods often lack spatial consistency, leading to editing artifacts and distortions. In this work, we address these limitations and introduce LOCATEdit, which enhances cross-attention maps through a graph-based approach that uses self-attention-derived patch relationships to maintain smooth, coherent attention across image regions, ensuring that alterations are limited to the designated items while the surrounding structure is retained. LOCATEdit consistently and substantially outperforms existing baselines on PIE-Bench, demonstrating state-of-the-art performance and effectiveness on various editing tasks. Code can be found at https://github.com/LOCATEdit/LOCATEdit/
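The core idea, regularizing a cross-attention-derived mask with patch affinities from self-attention, can be sketched as graph-Laplacian smoothing. The snippet below is a minimal illustration, not the authors' exact formulation: it treats (symmetrized) self-attention as a graph affinity matrix, builds the combinatorial Laplacian, and solves the closed-form smoothing problem `argmin_m ||m - c||^2 + lam * m^T L m`. The function name and the `lam` parameter are assumptions for illustration.

```python
import numpy as np

def laplacian_smooth_mask(cross_attn, self_attn, lam=1.0):
    """Smooth a per-patch cross-attention mask with a graph Laplacian
    built from self-attention patch affinities (illustrative sketch).

    cross_attn: (N,) attention of N image patches to the edited token
    self_attn:  (N, N) patch-to-patch self-attention weights
    lam:        smoothing strength (lam=0 returns cross_attn unchanged)
    """
    # Symmetrize self-attention into an undirected affinity matrix W.
    W = 0.5 * (self_attn + self_attn.T)
    # Combinatorial graph Laplacian L = D - W.
    D = np.diag(W.sum(axis=1))
    L = D - W
    # Closed-form minimizer of ||m - c||^2 + lam * m^T L m:
    # solve (I + lam * L) m = c.
    N = cross_attn.shape[0]
    return np.linalg.solve(np.eye(N) + lam * L, cross_attn)
```

Because `L @ ones = 0`, the smoothing preserves the total mask weight while shrinking high-frequency components, so attention values become more spatially coherent across strongly connected patches without the mask leaking outside the edited region.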
