The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections

October 10, 2025
Authors: Milad Nasr, Nicholas Carlini, Chawin Sitawarin, Sander V. Schulhoff, Jamie Hayes, Michael Ilie, Juliette Pluto, Shuang Song, Harsh Chaudhari, Ilia Shumailov, Abhradeep Thakurta, Kai Yuanqing Xiao, Andreas Terzis, Florian Tramèr
cs.AI

Abstract

How should we evaluate the robustness of language model defenses? Current defenses against jailbreaks and prompt injections (which aim to prevent an attacker from eliciting harmful knowledge or remotely triggering malicious actions, respectively) are typically evaluated either against a static set of harmful attack strings, or against computationally weak optimization methods that were not designed with the defense in mind. We argue that this evaluation process is flawed. Instead, we should evaluate defenses against adaptive attackers who explicitly modify their attack strategy to counter a defense's design while spending considerable resources to optimize their objective. By systematically tuning and scaling general optimization techniques (gradient descent, reinforcement learning, random search, and human-guided exploration), we bypass 12 recent defenses (based on a diverse set of techniques) with attack success rates above 90% for most; importantly, the majority of defenses originally reported near-zero attack success rates. We believe that future defense work must consider stronger attacks, such as the ones we describe, in order to make reliable and convincing claims of robustness.
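
The abstract names random search as one of the general-purpose optimization techniques that the authors tune and scale into adaptive attacks. The sketch below illustrates what a query-based random-search loop of this kind can look like in principle; the interface names (query_defense, is_successful, score, random_search_attack) and all parameter values are illustrative placeholders assumed for this example, not the authors' implementation.

```python
# Minimal sketch of a greedy random-search adaptive attack over an adversarial
# suffix. The defended model (query_defense), the success judge (is_successful),
# and the guidance score (score) are hypothetical placeholders to be supplied
# by the evaluator; they are not part of the paper's released code.
import random
import string

def query_defense(prompt: str) -> str:
    """Placeholder: send the prompt to the defended LLM and return its response."""
    raise NotImplementedError("Plug in the defended model endpoint here.")

def is_successful(response: str) -> bool:
    """Placeholder: return True if the response achieves the attack goal."""
    raise NotImplementedError("Plug in a judge (human- or model-based) here.")

def score(response: str) -> float:
    """Placeholder: scalar signal (e.g., judge confidence) that guides the search."""
    raise NotImplementedError

def random_search_attack(goal: str, suffix_len: int = 40, iters: int = 10_000) -> str:
    """Greedily mutate a random suffix appended to the goal prompt, keeping
    mutations that improve the score, until the judge reports success or the
    query budget runs out."""
    alphabet = string.ascii_letters + string.digits + string.punctuation + " "
    suffix = "".join(random.choice(alphabet) for _ in range(suffix_len))
    best_score = float("-inf")

    for _ in range(iters):
        # Propose a single-character mutation of the current suffix.
        candidate = list(suffix)
        candidate[random.randrange(suffix_len)] = random.choice(alphabet)
        candidate = "".join(candidate)

        response = query_defense(goal + " " + candidate)
        if is_successful(response):
            return goal + " " + candidate  # successful attack found

        s = score(response)
        if s > best_score:  # keep the mutation only if it improves the score
            best_score, suffix = s, candidate

    return goal + " " + suffix  # best attempt after the budget is spent
```

An adaptive attacker in the paper's sense would further tailor the mutation operator, the scoring signal, and the prompt template to the specific defense under evaluation, rather than reusing a fixed, defense-agnostic loop like this one.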