Selective Steering: Norm-Preserving Control Through Discriminative Layer Selection
January 27, 2026
Authors: Quy-Anh Dang, Chris Ngo
cs.AI
Abstract
Despite significant progress in alignment, large language models (LLMs) remain vulnerable to adversarial attacks that elicit harmful behaviors. Activation steering techniques offer a promising inference-time intervention approach, but existing methods suffer from critical limitations: activation addition requires careful coefficient tuning and is sensitive to layer-specific norm variations, while directional ablation provides only binary control. Recent work on Angular Steering introduces continuous control via rotation in a 2D subspace, but its practical implementation violates norm preservation, causing distribution shift and generation collapse, particularly in models below 7B parameters. We propose Selective Steering, which addresses these limitations through two key innovations: (1) a mathematically rigorous norm-preserving rotation formulation that maintains activation distribution integrity, and (2) discriminative layer selection that applies steering only where feature representations exhibit opposite-signed class alignment. Experiments across nine models demonstrate that Selective Steering achieves 5.5x higher attack success rates than prior methods while maintaining zero perplexity violations and approximately 100% capability retention on standard benchmarks. Our approach provides a principled, efficient framework for controllable and stable LLM behavior modification. Code: https://github.com/knoveleng/steering
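The two ingredients described above can be illustrated with a minimal sketch. The function names, the orthonormal-basis decomposition, and the opposite-sign selection criterion below are illustrative assumptions inferred from the abstract, not the paper's actual implementation (see the linked repository for that):

```python
import numpy as np

def norm_preserving_rotation(h, u, v, theta):
    """Rotate activation h by angle theta within the plane spanned by
    orthonormal vectors u and v. The component of h orthogonal to the
    plane is left untouched, so ||h|| is preserved exactly; this avoids
    the distribution shift caused by norm-violating steering."""
    a, b = h @ u, h @ v                # coordinates of h in the plane
    h_perp = h - a * u - b * v        # out-of-plane component
    a_new = a * np.cos(theta) - b * np.sin(theta)
    b_new = a * np.sin(theta) + b * np.cos(theta)
    return h_perp + a_new * u + b_new * v

def select_discriminative_layers(harmful_acts, harmless_acts, direction):
    """Hypothetical selection rule: keep only layers where the two classes
    project onto the steering direction with opposite signs, i.e. where
    the direction actually discriminates the behaviors."""
    selected = []
    for layer, (H, B) in enumerate(zip(harmful_acts, harmless_acts)):
        # H, B: (num_examples, hidden_dim) activations at this layer
        if np.mean(H @ direction) * np.mean(B @ direction) < 0:
            selected.append(layer)
    return selected
```

Because the rotation only mixes the two in-plane coordinates, `theta` gives continuous control (as in Angular Steering) while the activation norm, and hence the layer's output distribution, stays fixed.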