

DeContext as Defense: Safe Image Editing in Diffusion Transformers

December 18, 2025
Authors: Linghui Shen, Mingyue Cui, Xingyi Yang
cs.AI

Abstract

In-context diffusion models allow users to modify images with remarkable ease and realism. However, the same power raises serious privacy concerns: personal images can be easily manipulated for identity impersonation, misinformation, or other malicious uses, all without the owner's consent. While prior work has explored input perturbations to protect against misuse in personalized text-to-image generation, the robustness of modern, large-scale in-context DiT-based models remains largely unexamined. In this paper, we propose DeContext, a new method to safeguard input images from unauthorized in-context editing. Our key insight is that contextual information from the source image propagates to the output primarily through multimodal attention layers. By injecting small, targeted perturbations that weaken these cross-attention pathways, DeContext breaks this flow, effectively decoupling the output from the input. This simple defense is both efficient and robust. We further show that early denoising steps and specific transformer blocks dominate context propagation, which allows us to concentrate perturbations where they matter most. Experiments on Flux Kontext and Step1X-Edit show that DeContext consistently blocks unwanted image edits while preserving visual quality. These results highlight the effectiveness of attention-based perturbations as a powerful defense against image manipulation.
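
The abstract does not spell out the exact optimization objective, but the mechanism it describes (weakening the multimodal attention paths that carry the source image into the output, concentrated on early denoising steps and a few transformer blocks) maps naturally onto a PGD-style image perturbation. The sketch below is only an illustration of that idea in PyTorch, not the authors' released code: `edit_model.context_attention`, the chosen timesteps, the block indices, and the L_inf budget are all assumptions for exposition.

```python
import torch

def decontext_perturb(edit_model, image, prompt,
                      num_iters=50, eps=8 / 255, alpha=1 / 255,
                      early_timesteps=(0, 1, 2, 3),
                      target_blocks=(4, 5, 6)):
    """PGD-style sketch: find a small perturbation of `image` that
    suppresses the attention mass flowing from source-image tokens
    into the editor's denoising stream (illustrative only)."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(num_iters):
        loss = torch.zeros((), device=image.device)
        for t in early_timesteps:  # early denoising steps dominate context transfer
            # Hypothetical hook: returns per-block multimodal attention maps
            # from source-image tokens to the generated latent.
            attn = edit_model.context_attention(image + delta, prompt, timestep=t)
            for b in target_blocks:  # restrict to blocks that carry most context
                loss = loss + attn[b].mean()
        loss.backward()
        with torch.no_grad():
            # Descend on the attention objective, then project the
            # perturbation back into an L_inf ball of radius eps.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad = None
    return (image + delta).detach()
```

Restricting the loop to a few early timesteps and a few transformer blocks keeps the attack cheap, which is consistent with the paper's claim that context propagation is dominated by those regions.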