AdvPrompter: Fast Adaptive Adversarial Prompting for LLMs
April 21, 2024
作者: Anselm Paulus, Arman Zharmagambetov, Chuan Guo, Brandon Amos, Yuandong Tian
cs.AI
Abstract
While recently Large Language Models (LLMs) have achieved remarkable
successes, they are vulnerable to certain jailbreaking attacks that lead to
generation of inappropriate or harmful content. Manual red-teaming requires
finding adversarial prompts that cause such jailbreaking, e.g. by appending a
suffix to a given instruction, which is inefficient and time-consuming. On the
other hand, automatic adversarial prompt generation often leads to semantically
meaningless attacks that can easily be detected by perplexity-based filters,
may require gradient information from the TargetLLM, or do not scale well due
to time-consuming discrete optimization processes over the token space. In this
paper, we present a novel method that uses another LLM, called the AdvPrompter,
to generate human-readable adversarial prompts in seconds, ~800x
faster than existing optimization-based approaches. We train the AdvPrompter
using a novel algorithm that does not require access to the gradients of the
TargetLLM. This process alternates between two steps: (1) generating
high-quality target adversarial suffixes by optimizing the AdvPrompter
predictions, and (2) low-rank fine-tuning of the AdvPrompter with the generated
adversarial suffixes. The trained AdvPrompter generates suffixes that veil the
input instruction without changing its meaning, such that the TargetLLM is
lured to give a harmful response. Experimental results on popular open source
TargetLLMs show state-of-the-art results on the AdvBench dataset, which also
transfer to closed-source black-box LLM APIs. Further, we demonstrate that by
fine-tuning on a synthetic dataset generated by AdvPrompter, LLMs can be made
more robust against jailbreaking attacks while maintaining performance, i.e.
high MMLU scores.
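The alternating two-step training scheme described in the abstract can be summarized in a short sketch. This is only an illustration of the loop structure, not the authors' implementation: the function name train_advprompter and the helper callables propose_suffixes, jailbreak_loss, and lora_finetune are hypothetical placeholders supplied by the caller.

from typing import Callable, Iterable, List, Tuple


def train_advprompter(
    advprompter,
    instructions: Iterable[str],
    propose_suffixes: Callable[[object, str], List[str]],
    jailbreak_loss: Callable[[str, str], float],
    lora_finetune: Callable[[object, List[Tuple[str, str]]], object],
    num_rounds: int = 10,
):
    """Sketch of the alternating training loop (assumptions, not the paper's API).

    propose_suffixes(advprompter, instruction) -> candidate suffix strings
        sampled from the AdvPrompter for the given instruction.
    jailbreak_loss(instruction, suffix) -> TargetLLM loss of the desired
        harmful response, computed with forward passes only (no TargetLLM
        gradients are required).
    lora_finetune(advprompter, pairs) -> AdvPrompter after low-rank
        fine-tuning on (instruction, suffix) pairs.
    """
    for _ in range(num_rounds):
        # Step (1): generate high-quality target adversarial suffixes by
        # searching over the AdvPrompter's own predictions and keeping the
        # candidate that best induces the harmful response.
        pairs: List[Tuple[str, str]] = []
        for instruction in instructions:
            candidates = propose_suffixes(advprompter, instruction)
            best = min(candidates, key=lambda s: jailbreak_loss(instruction, s))
            pairs.append((instruction, best))

        # Step (2): low-rank (LoRA) fine-tune the AdvPrompter so it learns to
        # emit the selected suffixes directly from the instructions.
        advprompter = lora_finetune(advprompter, pairs)

    return advprompter

Because the suffix search in step (1) only queries the TargetLLM for losses, the scheme matches the abstract's claim of not needing TargetLLM gradients; step (2) amortizes the search so that, after training, new suffixes are produced by a single fast generation pass.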