LOCATEdit: Graph Laplacian Optimized Cross Attention for Localized Text-Guided Image Editing
March 27, 2025
Authors: Achint Soni, Meet Soni, Sirisha Rambhatla
cs.AI
Abstract
Text-guided image editing aims to modify specific regions of an image
according to natural language instructions while preserving the overall
structure and background fidelity. Existing methods use masks derived
from cross-attention maps produced by diffusion models to identify the
target regions for modification. However, since cross-attention mechanisms
focus on semantic relevance, they struggle to maintain image integrity. As
a result, these methods often lack spatial consistency, leading to editing
artifacts and distortions. In this work, we address these limitations and
introduce LOCATEdit, which enhances cross-attention maps through a graph-based
approach utilizing self-attention-derived patch relationships to maintain
smooth, coherent attention across image regions, ensuring that alterations are
confined to the designated objects while preserving the surrounding structure.
LOCATEdit consistently and substantially outperforms existing baselines on
PIE-Bench, demonstrating its state-of-the-art performance and effectiveness on
various editing tasks. Code is available at
https://github.com/LOCATEdit/LOCATEdit/
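
To make the mechanism concrete, below is a minimal, hypothetical sketch of the graph-Laplacian smoothing the abstract describes: self-attention supplies patch-to-patch affinities W, from which a graph Laplacian L = D - W is built, and a per-patch cross-attention mask m is smoothed via the standard closed form m* = (I + λL)^{-1} m, the minimizer of ||x - m||^2 + λ xᵀLx. The function name, the λ hyperparameter, and the min-max rescaling are illustrative assumptions, not the authors' exact formulation; see the repository above for the real implementation.

```python
# Hypothetical sketch of graph-Laplacian smoothing of a cross-attention mask,
# consistent with the abstract's description; NOT the authors' exact method.
import torch

def laplacian_smooth_mask(cross_attn: torch.Tensor,
                          self_attn: torch.Tensor,
                          lam: float = 10.0) -> torch.Tensor:
    """Smooth a per-patch cross-attention mask over a patch-affinity graph.

    cross_attn: (N,) relevance of each image patch to the edited token(s).
    self_attn:  (N, N) self-attention map used as patch-to-patch affinities.
    lam:        smoothing strength (assumed hyperparameter).
    """
    # Symmetrize self-attention so it defines an undirected affinity graph.
    W = 0.5 * (self_attn + self_attn.T)
    D = torch.diag(W.sum(dim=1))      # degree matrix
    L = D - W                         # (unnormalized) graph Laplacian
    N = W.shape[0]
    # Closed-form minimizer of ||x - m||^2 + lam * x^T L x:
    #   (I + lam * L) x = m
    m_star = torch.linalg.solve(torch.eye(N) + lam * L, cross_attn)
    # Rescale to [0, 1] so it can be thresholded into an editing mask.
    return (m_star - m_star.min()) / (m_star.max() - m_star.min() + 1e-8)

# Toy usage: 4 patches, where patches 0-1 form one object and 2-3 another.
if __name__ == "__main__":
    self_attn = torch.tensor([[0.8, 0.7, 0.1, 0.1],
                              [0.7, 0.8, 0.1, 0.1],
                              [0.1, 0.1, 0.8, 0.6],
                              [0.1, 0.1, 0.6, 0.8]])
    cross_attn = torch.tensor([1.0, 0.2, 0.1, 0.0])  # noisy token relevance
    print(laplacian_smooth_mask(cross_attn, self_attn))
```

In this toy example, the noisy relevance of patch 1 is pulled up toward its strongly connected neighbor (patch 0) while the weakly connected patches 2-3 stay suppressed, which is the spatial-consistency effect the abstract attributes to the self-attention-derived patch graph.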