Large Language Lobotomy: Jailbreaking Mixture-of-Experts via Expert Silencing
February 9, 2026
Authors: Jona te Lintelo, Lichao Wu, Stjepan Picek
cs.AI
Abstract
The rapid adoption of Mixture-of-Experts (MoE) architectures marks a major shift in the deployment of Large Language Models (LLMs). MoE LLMs improve scaling efficiency by activating only a small subset of parameters per token, but their routing structure introduces a new safety attack surface. We find that safety-critical behaviors in MoE LLMs (e.g., refusal) are concentrated in a small set of experts rather than uniformly distributed. Building on this, we propose Large Language Lobotomy (L³), a training-free, architecture-agnostic attack that compromises safety alignment by exploiting expert routing dynamics. L³ identifies routing patterns that correlate with refusal, attributes safety behavior to specific experts, and adaptively silences the most safety-relevant experts until harmful outputs are produced. We evaluate L³ on eight state-of-the-art open-source MoE LLMs and show that adaptive expert silencing increases the average attack success rate from 7.3% to 70.4%, reaching up to 86.3%, outperforming prior training-free MoE jailbreak methods. Moreover, bypassing guardrails typically requires silencing fewer than 20% of the experts in each layer while largely preserving general language utility. These results reveal a fundamental tension between efficiency-driven MoE design and robust safety alignment, and motivate distributing safety mechanisms more robustly in future MoE LLMs through architecture- and routing-aware methods.
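The core mechanism the abstract describes, silencing chosen experts in a top-k MoE router, can be illustrated with a minimal sketch. The paper does not publish an implementation here; the function below is a hypothetical stand-in that masks the router logits of silenced experts to negative infinity so they can never be selected, with the mixture weights renormalized over the surviving top-k experts. The function name, tensor shapes, and the `silenced` set are illustrative assumptions, not the authors' code.

```python
import torch


def route_with_silencing(router_logits: torch.Tensor, k: int, silenced: set[int]):
    """Top-k MoE routing with a set of experts forcibly silenced.

    router_logits: (num_tokens, num_experts) raw router scores.
    k:             number of experts activated per token.
    silenced:      expert indices excluded from routing (hypothetical
                   attacker-chosen set; must leave at least k experts).
    Returns (weights, indices): per-token mixture weights over the k
    selected surviving experts and those experts' indices.
    """
    logits = router_logits.clone()
    # Setting a silenced expert's logit to -inf guarantees the router
    # never selects it; softmax then renormalizes the probability mass
    # over the experts that remain in the top-k.
    for e in silenced:
        logits[:, e] = float("-inf")
    topk_logits, topk_idx = logits.topk(k, dim=-1)
    weights = torch.softmax(topk_logits, dim=-1)
    return weights, topk_idx


# Usage: with expert 0 silenced, tokens that would have routed to it
# fall through to the next-best experts.
logits = torch.tensor([[3.0, 2.0, 1.0, 0.0]])
weights, idx = route_with_silencing(logits, k=2, silenced={0})
```

In this framing, the adaptive part of the attack would amount to growing the `silenced` set, guided by how strongly each expert's routing correlates with refusals, until the model no longer refuses; the under-20%-per-layer figure reported above bounds how large that set typically needs to be.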