Safety Arithmetic: A Framework for Test-time Safety Alignment of Language Models by Steering Parameters and Activations
June 17, 2024
Authors: Rima Hazra, Sayan Layek, Somnath Banerjee, Soujanya Poria
cs.AI
Abstract
Ensuring the safe alignment of large language models (LLMs) with human values
is critical as they become integral to applications like translation and
question answering. Current alignment methods struggle with dynamic user
intentions and complex objectives, making models vulnerable to generating
harmful content. We propose Safety Arithmetic, a training-free framework
enhancing LLM safety across different scenarios: Base models, Supervised
fine-tuned models (SFT), and Edited models. Safety Arithmetic involves Harm
Direction Removal to avoid harmful content and Safety Alignment to promote safe
responses. Additionally, we present NoIntentEdit, a dataset highlighting edit
instances that could compromise model safety if used unintentionally. Our
experiments show that Safety Arithmetic significantly improves safety measures,
reduces over-safety, and maintains model utility, outperforming existing
methods in ensuring safe content generation.
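The abstract does not give the exact formulas for the two components, so the sketch below only illustrates the general idea under our own assumptions: a task-vector-style parameter edit for Harm Direction Removal (subtracting a scaled "harm vector" estimated from the difference between harmful-fine-tuned and base parameters) and an inference-time activation steer for Safety Alignment (adding a scaled "safety direction" to a hidden state). The function names, the alpha/beta coefficients, and the way the directions are estimated are hypothetical and are not taken from the paper.

```python
import torch


def remove_harm_direction(base_params, harmful_params, alpha=0.5):
    """Harm Direction Removal (sketch, task-vector style).

    The hypothetical 'harm vector' is the parameter delta between a model
    fine-tuned on harmful data and the base model; subtracting a scaled
    copy of it steers the parameters away from harmful behaviour.
    """
    edited = {}
    for name, w in base_params.items():
        harm_vector = harmful_params[name] - w
        edited[name] = w - alpha * harm_vector
    return edited


def steer_activation(hidden_state, safety_direction, beta=2.0):
    """Safety Alignment at test time (sketch, activation steering).

    Adds a scaled, normalized 'safety direction' (e.g. a mean difference of
    hidden states on safe vs. unsafe prompts) to a layer's hidden state.
    """
    direction = safety_direction / safety_direction.norm()
    return hidden_state + beta * direction


if __name__ == "__main__":
    # Toy tensors standing in for real model parameters and activations.
    base = {"layer.weight": torch.randn(4, 4)}
    harmful = {"layer.weight": base["layer.weight"] + 0.1 * torch.randn(4, 4)}
    edited = remove_harm_direction(base, harmful, alpha=0.5)

    h = torch.randn(4)            # hidden state at some layer
    safety_dir = torch.randn(4)   # estimated safety direction
    h_steered = steer_activation(h, safety_dir, beta=2.0)
    print(edited["layer.weight"].shape, h_steered.shape)
```

In practice both steps would be applied to a full LLM checkpoint and to selected transformer layers during generation; the toy tensors here only demonstrate the arithmetic.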