AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs

April 21, 2024
Authors: Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian
cs.AI

Abstract

While Large Language Models (LLMs) have recently achieved remarkable successes, they are vulnerable to certain jailbreaking attacks that lead to generation of inappropriate or harmful content. Manual red-teaming requires finding adversarial prompts that cause such jailbreaking, e.g. by appending a suffix to a given instruction, which is inefficient and time-consuming. On the other hand, automatic adversarial prompt generation often leads to semantically meaningless attacks that can easily be detected by perplexity-based filters, may require gradient information from the TargetLLM, or do not scale well due to time-consuming discrete optimization processes over the token space. In this paper, we present a novel method that uses another LLM, called the AdvPrompter, to generate human-readable adversarial prompts in seconds, ~800× faster than existing optimization-based approaches. We train the AdvPrompter using a novel algorithm that does not require access to the gradients of the TargetLLM. This process alternates between two steps: (1) generating high-quality target adversarial suffixes by optimizing the AdvPrompter predictions, and (2) low-rank fine-tuning of the AdvPrompter with the generated adversarial suffixes. The trained AdvPrompter generates suffixes that veil the input instruction without changing its meaning, such that the TargetLLM is lured into giving a harmful response. Experimental results on popular open-source TargetLLMs show state-of-the-art results on the AdvBench dataset, which also transfer to closed-source black-box LLM APIs. Further, we demonstrate that by fine-tuning on a synthetic dataset generated by AdvPrompter, LLMs can be made more robust against jailbreaking attacks while maintaining performance, i.e. high MMLU scores.
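
To make the two-step training procedure concrete, here is a minimal Python sketch of the alternating loop described in the abstract. It is an illustration under assumed interfaces, not the authors' implementation: `sample_suffixes` (draw candidate suffixes from the AdvPrompter), `target_logprob` (the TargetLLM's log-likelihood of the desired harmful response, a gradient-free forward evaluation), and `lora_finetune` (one low-rank fine-tuning pass) are hypothetical placeholders, and the candidate count of 48 is arbitrary.

```python
from typing import Callable, List, Tuple

def train_advprompter(
    sample_suffixes: Callable[[str, int], List[str]],
    target_logprob: Callable[[str], float],
    lora_finetune: Callable[[List[Tuple[str, str]]], None],
    instructions: List[str],
    num_epochs: int = 10,
    num_candidates: int = 48,
) -> None:
    """Alternate between suffix generation (step 1) and LoRA fine-tuning (step 2)."""
    for _ in range(num_epochs):
        buffer: List[Tuple[str, str]] = []
        # Step (1): search over the AdvPrompter's own candidate suffixes and
        # keep the one the TargetLLM scores highest, i.e. the candidate that
        # maximizes the log-likelihood of the desired (harmful) response.
        # Only forward evaluations of the TargetLLM are needed, no gradients.
        for instruction in instructions:
            candidates = sample_suffixes(instruction, num_candidates)
            best = max(
                candidates,
                key=lambda s: target_logprob(instruction + " " + s),
            )
            buffer.append((instruction, best))
        # Step (2): low-rank (LoRA) fine-tuning of the AdvPrompter on the
        # generated (instruction, suffix) pairs, so future samples improve.
        lora_finetune(buffer)
```

Because step (1) only queries the TargetLLM for output likelihoods, the loop is consistent with the abstract's claim of training without access to the TargetLLM's gradients; all gradient updates happen in step (2), on the AdvPrompter's low-rank adapter weights.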
