NeST: Neuron Selective Tuning for LLM Safety
February 18, 2026
Authors: Sasha Behrouzi, Lichao Wu, Mohamadreza Rostami, Ahmad-Reza Sadeghi
cs.AI
Abstract
Safety alignment is essential for the responsible deployment of large language models (LLMs). Yet, existing approaches often rely on heavyweight fine-tuning that is costly to update, audit, and maintain across model families. Full fine-tuning incurs substantial computational and storage overhead, while parameter-efficient methods such as LoRA trade efficiency for inconsistent safety gains and sensitivity to design choices. Safety intervention mechanisms such as circuit breakers reduce unsafe outputs without modifying model weights, but do not directly shape or preserve the internal representations that govern safety behavior. These limitations hinder rapid and reliable safety updates, particularly in settings where models evolve frequently or must adapt to new policies and domains.
We present NeST, a lightweight, structure-aware safety alignment framework that strengthens refusal behavior by selectively adapting a small subset of safety-relevant neurons while freezing the remainder of the model. NeST aligns parameter updates with the internal organization of safety behavior by clustering functionally coherent safety neurons and enforcing shared updates within each cluster, enabling targeted and stable safety adaptation without broad model modification or inference-time overhead. We benchmark NeST against three dominant baselines (full fine-tuning, LoRA-based fine-tuning, and circuit breakers) across 10 open-weight LLMs spanning multiple model families and sizes. Across all evaluated models, NeST reduces the attack success rate from an average of 44.5% to 4.36%, corresponding to a 90.2% reduction in unsafe generations, while requiring only 0.44 million trainable parameters on average. This amounts to a 17,310x decrease in updated parameters compared to full fine-tuning and a 9.25x reduction relative to LoRA, while consistently achieving stronger safety alignment.
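The abstract does not specify how safety-relevant neurons are scored or how clusters are formed, so the following is only a minimal sketch of the mechanism it describes, assuming activation-gap scoring on harmful versus harmless prompts, a toy k-means over the selected neurons' weight rows, and one shared trainable update per cluster applied on top of a frozen linear layer. All function names and hyperparameters here are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch of neuron-selective tuning with cluster-shared updates.
# Assumed details (not in the abstract): activation-gap neuron scoring,
# k-means clustering, one shared delta row per cluster.
import torch
import torch.nn as nn


def select_safety_neurons(harmful_acts, harmless_acts, top_k):
    """Rank neurons by mean activation gap on harmful vs. harmless prompts.

    harmful_acts / harmless_acts: [num_prompts, num_neurons] activations of
    one layer. Returns indices of the top_k neurons whose average behavior
    differs most between the two prompt sets.
    """
    gap = (harmful_acts.mean(0) - harmless_acts.mean(0)).abs()
    return torch.topk(gap, top_k).indices


def cluster_neurons(weight_rows, num_clusters, iters=20):
    """Toy k-means over the selected neurons' weight rows, grouping
    functionally similar neurons into a shared cluster."""
    centers = weight_rows[torch.randperm(len(weight_rows))[:num_clusters]]
    for _ in range(iters):
        assign = torch.cdist(weight_rows, centers).argmin(dim=1)
        for c in range(num_clusters):
            members = weight_rows[assign == c]
            if len(members) > 0:
                centers[c] = members.mean(0)
    return assign


class ClusterSharedUpdate(nn.Module):
    """Wraps a frozen linear layer; trains one shared delta per cluster,
    added to every selected neuron (weight row) in that cluster."""

    def __init__(self, linear, neuron_idx, assign, num_clusters):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad_(False)  # the rest of the model stays frozen
        self.register_buffer("neuron_idx", neuron_idx)
        self.register_buffer("assign", assign)
        # One trainable update row per cluster, shared by all its members,
        # so parameter count scales with num_clusters, not model size.
        self.delta = nn.Parameter(torch.zeros(num_clusters, linear.in_features))

    def forward(self, x):
        w = self.linear.weight.clone()
        w[self.neuron_idx] = w[self.neuron_idx] + self.delta[self.assign]
        return nn.functional.linear(x, w, self.linear.bias)


# Illustrative wiring for one layer (names and sizes are made up):
# idx = select_safety_neurons(h_harmful, h_harmless, top_k=128)
# assign = cluster_neurons(layer.weight.data[idx], num_clusters=16)
# adapted = ClusterSharedUpdate(layer, idx, assign, num_clusters=16)
```

Because every cluster shares a single update row, the trainable parameter count grows with the number of clusters times the layer width rather than with model size, which is consistent with the sub-million average parameter budget reported above.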