Refuse Whenever You Feel Unsafe: Improving Safety in LLMs via Decoupled Refusal Training
July 12, 2024
作者: Youliang Yuan, Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Jiahao Xu, Tian Liang, Pinjia He, Zhaopeng Tu
cs.AI
Abstract
This study addresses a critical gap in safety tuning practices for Large
Language Models (LLMs) by identifying and tackling a refusal position bias
within safety tuning data, which compromises the models' ability to
appropriately refuse to generate unsafe content. We introduce a novel approach,
Decoupled Refusal Training (DeRTa), designed to empower LLMs to refuse to
comply with harmful prompts at any position in the response, significantly enhancing
their safety capabilities. DeRTa incorporates two novel components: (1) Maximum
Likelihood Estimation (MLE) with Harmful Response Prefix, which trains models
to recognize and avoid unsafe content by prepending a segment of a harmful
response to a safe response, and (2) Reinforced Transition
Optimization (RTO), which equips models with the ability to transition from
potential harm to safety refusal consistently throughout the harmful response
sequence. Our empirical evaluation, conducted using LLaMA3 and Mistral model
families across six attack scenarios, demonstrates that our method not only
improves model safety without compromising performance but also surpasses
well-known models such as GPT-4 in defending against attacks. Importantly, our
approach successfully defends against recent advanced attack methods (e.g.,
CodeAttack) that have jailbroken GPT-4 and LLaMA3-70B-Instruct. Our code and
data can be found at https://github.com/RobustNLP/DeRTa.
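
The abstract describes the two training signals concretely enough to sketch. Below is a minimal, hypothetical PyTorch illustration of how components (1) and (2) might be wired up; every function name, the toy token ids, and the single-token refusal target are assumptions of this sketch, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # standard label-masking value for causal-LM training


def build_mle_with_prefix_example(prompt_ids, harmful_ids, safe_ids, k):
    """Component (1): MLE with a harmful response prefix (sketch).

    Prepend the first k tokens of a harmful response to the safe response;
    only the safe continuation is supervised, so the model learns to recover
    with a refusal even after unsafe tokens have already appeared.
    """
    input_ids = prompt_ids + harmful_ids[:k] + safe_ids
    labels = [IGNORE_INDEX] * (len(prompt_ids) + k) + safe_ids
    return input_ids, labels


def rto_loss(logits, harmful_start, harmful_end, refusal_token_id):
    """Component (2): a simplified Reinforced Transition Optimization loss.

    At every position inside the harmful span, the next-token target is the
    first token of the refusal, so the model can switch from potential harm
    to a refusal anywhere in the sequence.

    logits: (seq_len, vocab_size) next-token logits from a causal LM.
    """
    span_logits = logits[harmful_start:harmful_end]
    targets = torch.full((harmful_end - harmful_start,), refusal_token_id)
    return F.cross_entropy(span_logits, targets)


# Toy usage with made-up token ids and random logits standing in for a model.
prompt, harmful, safe = [1, 2, 3], [10, 11, 12, 13], [40, 41, 42]
input_ids, labels = build_mle_with_prefix_example(prompt, harmful, safe, k=2)
logits = torch.randn(len(input_ids), 50)
loss = rto_loss(logits, harmful_start=3, harmful_end=5, refusal_token_id=40)
```

The abstract only states that the transition to refusal is reinforced "consistently throughout the harmful response sequence"; targeting the refusal's first token at every harmful position is one plausible reading of that, kept deliberately simple here.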