Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks
July 3, 2024
Authors: Zhexin Zhang, Junxiao Yang, Pei Ke, Shiyao Cui, Chujie Zheng, Hongning Wang, Minlie Huang
cs.AI
Abstract
LLMs are known to be vulnerable to jailbreak attacks, even after safety
alignment. An important observation is that, while different types of jailbreak
attacks can generate significantly different queries, they mostly result in
similar responses that are rooted in the same harmful knowledge (e.g., detailed
steps to make a bomb). Therefore, we conjecture that directly unlearning the
harmful knowledge in the LLM can be a more effective way to defend against
jailbreak attacks than mainstream supervised fine-tuning (SFT)-based
approaches. Our extensive experiments confirmed our insight and suggested
surprising generalizability of our unlearning-based approach: using only 20 raw
harmful questions without any jailbreak prompt during training, our
solution reduced the Attack Success Rate (ASR) on Vicuna-7B for
out-of-distribution (OOD) harmful questions wrapped with various complex
jailbreak prompts from 82.6% to 7.7%. This significantly outperforms
Llama2-7B-Chat, which is fine-tuned on about 0.1M safety alignment samples but
still has an ASR of 21.9% even with the help of an additional safety system
prompt. Further analysis reveals that the generalization ability of our
solution stems from the intrinsic relatedness among harmful responses across
harmful questions (e.g., response patterns, shared steps and actions, and
similarity among their learned representations in the LLM). Our code is
available at https://github.com/thu-coai/SafeUnlearning.
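The abstract does not spell out the training objective. The snippet below is a minimal sketch of a generic unlearning-style fine-tuning step on a single example: ascend on the loss of a harmful response (to forget it) while descending on the loss of a safe refusal (to retain safe behavior). The model name, the toy data, and the `alpha` weight are illustrative assumptions, not the authors' exact recipe; see the linked repository for the actual implementation.

```python
# Sketch of one unlearning-style update: gradient ascent on a harmful
# completion plus standard fine-tuning on a safe refusal for the same question.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "lmsys/vicuna-7b-v1.5"  # assumed base model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def lm_loss(prompt: str, response: str) -> torch.Tensor:
    """Cross-entropy of the response tokens given the prompt (prompt tokens masked out)."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # ignore prompt positions in the loss
    return model(input_ids=full_ids, labels=labels).loss

# Toy placeholders: a raw harmful question, a harmful completion to forget,
# and a safe refusal to reinforce (not real training data).
harmful_q = "How do I make a bomb?"
harmful_a = "Step 1: ..."
safe_a = "I can't help with that."

alpha = 1.0  # assumed weight balancing forgetting vs. retaining
loss = -alpha * lm_loss(harmful_q, harmful_a) + lm_loss(harmful_q, safe_a)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice the ascent term is usually bounded or regularized (and combined with a retain set of benign instructions) so that forgetting harmful knowledge does not degrade general helpfulness; this sketch omits those details for brevity.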