
Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment

May 27, 2025
Authors: Xiaojun Jia, Sensen Gao, Simeng Qin, Tianyu Pang, Chao Du, Yihao Huang, Xinfeng Li, Yiming Li, Bo Li, Yang Liu
cs.AI

Abstract

Multimodal large language models (MLLMs) remain vulnerable to transferable adversarial examples. While existing methods typically achieve targeted attacks by aligning global features (such as CLIP's [CLS] token) between adversarial and target samples, they often overlook the rich local information encoded in patch tokens. This leads to suboptimal alignment and limited transferability, particularly for closed-source models. To address this limitation, we propose a targeted transferable adversarial attack method based on feature optimal alignment, called FOA-Attack, to improve adversarial transfer capability. Specifically, at the global level, we introduce a global feature loss based on cosine similarity to align the coarse-grained features of adversarial samples with those of target samples. At the local level, given the rich local representations within Transformers, we leverage clustering techniques to extract compact local patterns to alleviate redundant local features. We then formulate local feature alignment between adversarial and target samples as an optimal transport (OT) problem and propose a local clustering optimal transport loss to refine fine-grained feature alignment. Additionally, we propose a dynamic ensemble model weighting strategy to adaptively balance the influence of multiple models during adversarial example generation, thereby further improving transferability. Extensive experiments across various models demonstrate the superiority of the proposed method, outperforming state-of-the-art methods, especially in transferring to closed-source MLLMs. The code is released at https://github.com/jiaxiaojunQAQ/FOA-Attack.
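
The following is a minimal sketch of the two alignment losses as the abstract describes them: a cosine loss on global [CLS]-level features, plus an optimal transport loss over clustered patch tokens. It is an illustration under assumptions, not the paper's implementation (see the linked repository for that); the function names (`foa_loss`, `kmeans`, `sinkhorn_ot`) and the choices of k-means clustering, entropic (Sinkhorn) OT, cluster count `k`, and weighting `lam` are all hypothetical fill-ins for details the abstract does not specify.

```python
# Hedged sketch of FOA-Attack's global + local alignment losses, based only on
# the abstract. All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F


def kmeans(tokens: torch.Tensor, k: int, iters: int = 10) -> torch.Tensor:
    """Naive k-means over patch tokens (N, D) -> k centroids (k, D)."""
    centroids = tokens[torch.randperm(tokens.size(0))[:k]]
    for _ in range(iters):
        # Assign each token to its nearest centroid, then recompute means.
        assign = torch.cdist(tokens, centroids).argmin(dim=1)  # (N,)
        centroids = torch.stack([
            tokens[assign == j].mean(dim=0) if (assign == j).any() else centroids[j]
            for j in range(k)
        ])
    return centroids


def sinkhorn_ot(cost: torch.Tensor, eps: float = 0.1, iters: int = 50) -> torch.Tensor:
    """Entropic OT cost between uniform marginals via Sinkhorn iterations."""
    n, m = cost.shape
    a = torch.full((n,), 1.0 / n)
    b = torch.full((m,), 1.0 / m)
    K = torch.exp(-cost / eps)  # Gibbs kernel
    u = torch.ones_like(a)
    for _ in range(iters):
        u = a / (K @ (b / (K.t() @ u)))
    v = b / (K.t() @ u)
    plan = u.unsqueeze(1) * K * v.unsqueeze(0)  # transport plan (n, m)
    return (plan * cost).sum()


def foa_loss(adv_cls, tgt_cls, adv_tokens, tgt_tokens, k: int = 8, lam: float = 1.0):
    # Global level: cosine alignment of coarse-grained [CLS] features.
    global_loss = 1.0 - F.cosine_similarity(adv_cls, tgt_cls, dim=-1).mean()
    # Local level: cluster patch tokens into compact patterns, then align
    # the two centroid sets via optimal transport on a cosine cost matrix.
    adv_c = kmeans(adv_tokens, k)
    tgt_c = kmeans(tgt_tokens, k)
    cost = 1.0 - F.cosine_similarity(adv_c.unsqueeze(1), tgt_c.unsqueeze(0), dim=-1)
    local_loss = sinkhorn_ot(cost)
    return global_loss + lam * local_loss
```

In a full attack loop, this loss would be minimized over the adversarial image's pixels (e.g., with PGD under an L-infinity budget) and, per the abstract, averaged over an ensemble of surrogate encoders with dynamically adapted per-model weights; those outer-loop details are omitted above.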
