
DeContext as Defense: Safe Image Editing in Diffusion Transformers

December 18, 2025
Authors: Linghui Shen, Mingyue Cui, Xingyi Yang
cs.AI

Abstract

In-context diffusion models allow users to modify images with remarkable ease and realism. However, the same power raises serious privacy concerns: personal images can be easily manipulated for identity impersonation, misinformation, or other malicious uses, all without the owner's consent. While prior work has explored input perturbations to protect against misuse in personalized text-to-image generation, the robustness of modern, large-scale in-context DiT-based models remains largely unexamined. In this paper, we propose DeContext, a new method to safeguard input images from unauthorized in-context editing. Our key insight is that contextual information from the source image propagates to the output primarily through multimodal attention layers. By injecting small, targeted perturbations that weaken these cross-attention pathways, DeContext breaks this flow, effectively decoupling the link between input and output. This simple defense is both efficient and robust. We further show that early denoising steps and specific transformer blocks dominate context propagation, which allows us to concentrate perturbations where they matter most. Experiments on Flux Kontext and Step1X-Edit show that DeContext consistently blocks unwanted image edits while preserving visual quality. These results highlight the effectiveness of attention-based perturbations as a powerful defense against image manipulation.
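To make the described mechanism concrete, the sketch below shows one plausible way such an attention-weakening perturbation could be optimized: a PGD-style loop that minimizes the attention mass flowing from output queries to source-image key tokens at a few early denoising steps, matching the abstract's observation that early steps dominate context propagation. This is only an illustrative sketch based on the abstract, not the authors' implementation: the `model.attention_probs` hook, `model.src_token_slice`, and all hyperparameters are hypothetical stand-ins rather than a real library API.

```python
# Illustrative PGD-style sketch of an attention-weakening perturbation.
# `model` is a hypothetical in-context DiT wrapper; `attention_probs` and
# `src_token_slice` are assumed interfaces for illustration only.
import torch

def decontext_perturb(model, src_image, prompt_emb, early_timesteps,
                      eps=8 / 255, step_size=1 / 255, iters=50):
    """Return a protected image whose perturbation (||delta||_inf <= eps)
    suppresses cross-attention from output queries to source-image keys."""
    delta = torch.zeros_like(src_image, requires_grad=True)
    for _ in range(iters):
        loss = torch.zeros((), device=src_image.device)
        for t in early_timesteps:  # early steps assumed to dominate context flow
            # Hypothetical hook: attention probabilities of the multimodal
            # attention layers for the perturbed source image at step t.
            attn = model.attention_probs(src_image + delta, prompt_emb, t)
            # Penalize attention mass placed on source-image key tokens.
            loss = loss + attn[..., model.src_token_slice].mean()
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # descend on the attention loss
            delta.clamp_(-eps, eps)                 # keep the change imperceptible
        delta.grad = None
    return (src_image + delta).clamp(0, 1).detach()
```

Restricting the loss to early timesteps and, in principle, to the transformer blocks that carry most of the context would keep the attack budget small while still severing the input-output link the abstract describes.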