IAG: Input-aware Backdoor Attack on VLMs for Visual Grounding
August 13, 2025
Authors: Junxian Li, Beining Xu, Di Zhang
cs.AI
Abstract
Vision-language models (VLMs) have shown significant advancements in tasks
such as visual grounding, where they localize specific objects in images based
on natural language queries. However, security issues in visual grounding
tasks for VLMs remain underexplored, especially in the context of backdoor
attacks. In this paper, we introduce a novel input-aware backdoor attack
method, IAG, designed to manipulate the grounding behavior of VLMs. This
attack forces the model to ground a specific target object in the input image,
regardless of the user's query. We propose an adaptive trigger generator that
embeds the semantic information of the attack target's description into the
original image using a text-conditional U-Net, thereby overcoming the
open-vocabulary attack challenge. To ensure the attack's stealthiness, we use
a reconstruction loss to minimize visual discrepancies between poisoned and
clean images. Additionally, we introduce a unified method for generating
attack data. IAG is evaluated both theoretically and empirically,
demonstrating its feasibility and effectiveness. Notably, our ASR@0.5 on
InternVL-2.5-8B exceeds 65% across various test sets. IAG also shows promising
potential in manipulating Ferret-7B and LLaVA-1.5-7B, with minimal accuracy
degradation on clean samples. Extensive experiments, including ablation
studies and evaluations against potential defenses, further demonstrate the
robustness and transferability of our attack.
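The core mechanism named in the abstract, an input-aware trigger generator conditioned on the attack target's text, trained with a reconstruction loss for stealth, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's implementation: the module sizes, the FiLM-style conditioning, and the 0.05 perturbation bound are all hypothetical, and the attack (grounding) loss against the victim VLM is omitted since it requires the model itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriggerGenerator(nn.Module):
    """Toy text-conditional generator: encodes the target description,
    injects it into a small encoder-decoder over the image, and emits a
    bounded additive perturbation (the trigger) on the clean image."""
    def __init__(self, vocab_size=1000, embed_dim=32, hidden=16):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, embed_dim)
        self.enc = nn.Conv2d(3, hidden, kernel_size=3, stride=2, padding=1)
        self.cond = nn.Linear(embed_dim, hidden)  # FiLM-style text conditioning
        self.dec = nn.ConvTranspose2d(hidden, 3, kernel_size=4, stride=2, padding=1)

    def forward(self, image, target_tokens):
        # Mean-pool token embeddings of the attack-target description.
        t = self.text_embed(target_tokens).mean(dim=1)   # (B, embed_dim)
        h = F.relu(self.enc(image))                      # (B, hidden, H/2, W/2)
        h = h + self.cond(t)[:, :, None, None]           # inject text condition
        delta = torch.tanh(self.dec(h)) * 0.05           # small, bounded trigger
        return (image + delta).clamp(0.0, 1.0)

# One hypothetical training step: the reconstruction loss keeps the
# poisoned image visually close to the clean one; in the full attack it
# would be combined with a grounding loss on the victim VLM's output.
gen = TriggerGenerator()
clean = torch.rand(2, 3, 32, 32)                 # toy clean images in [0, 1]
tokens = torch.randint(0, 1000, (2, 5))          # toy target-description tokens
poisoned = gen(clean, tokens)
recon_loss = F.mse_loss(poisoned, clean)
```

Because the perturbation is generated from both the image and the target text, the trigger varies per input (input-aware) and can encode an open-vocabulary attack target rather than a fixed patch.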