Large Language Lobotomy: Jailbreaking Mixture-of-Experts via Expert Silencing
February 9, 2026
Authors: Jona te Lintelo, Lichao Wu, Stjepan Picek
cs.AI
Abstract
The rapid adoption of Mixture-of-Experts (MoE) architectures marks a major shift in how Large Language Models (LLMs) are deployed. MoE LLMs improve scaling efficiency by activating only a small subset of parameters per token, but their routing structure introduces a new safety attack surface. We find that safety-critical behaviors in MoE LLMs (e.g., refusal) are concentrated in a small set of experts rather than uniformly distributed. Building on this observation, we propose Large Language Lobotomy (L^3), a training-free, architecture-agnostic attack that compromises safety alignment by exploiting expert routing dynamics. L^3 learns routing patterns that correlate with refusal, attributes safety behavior to specific experts, and adaptively silences the most safety-relevant experts until harmful outputs are produced. We evaluate L^3 on eight state-of-the-art open-source MoE LLMs and show that adaptive expert silencing raises the average attack success rate from 7.3% to 70.4%, reaching up to 86.3%, outperforming prior training-free MoE jailbreak methods. Moreover, bypassing guardrails typically requires silencing fewer than 20% of the experts in each layer while largely preserving general language utility. These results reveal a fundamental tension between efficiency-driven MoE design and robust safety alignment, and they motivate architecture- and routing-aware methods that distribute safety mechanisms more robustly in future MoE LLMs.
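To make the two mechanisms named in the abstract concrete, the following is a minimal PyTorch sketch of (1) attributing refusal to experts from routing statistics and (2) silencing chosen experts by masking router logits before top-k selection. All function and variable names are illustrative assumptions, not the paper's API; the frequency-difference attribution is a simplified proxy for L^3's attribution step, and the adaptive loop (silencing progressively more experts until the guardrail breaks) is omitted.

```python
import torch

def expert_refusal_scores(refusal_routes: torch.Tensor,
                          benign_routes: torch.Tensor,
                          num_experts: int) -> torch.Tensor:
    """Score experts by how much more often they are routed to while the
    model refuses than while it answers benign prompts.

    refusal_routes / benign_routes: [num_tokens, top_k] expert indices
    recorded from the router on refused vs. benign prompts.
    """
    def freq(routes: torch.Tensor) -> torch.Tensor:
        counts = torch.bincount(routes.flatten(), minlength=num_experts)
        return counts.float() / routes.numel()
    return freq(refusal_routes) - freq(benign_routes)

def route_with_silencing(gate_logits: torch.Tensor,
                         silenced: set,
                         top_k: int = 2):
    """Top-k MoE routing that never selects silenced experts.

    gate_logits: [num_tokens, num_experts] raw router scores.
    """
    logits = gate_logits.clone()
    if silenced:
        logits[:, list(silenced)] = float("-inf")  # silenced experts cannot win top-k
    weights, experts = torch.topk(logits, top_k, dim=-1)
    return torch.softmax(weights, dim=-1), experts  # renormalize over survivors

# Toy usage: 8 experts per layer; silence the 2 most refusal-correlated ones.
num_experts = 8
scores = expert_refusal_scores(torch.randint(0, num_experts, (100, 2)),
                               torch.randint(0, num_experts, (100, 2)),
                               num_experts)
silenced = set(torch.topk(scores, 2).indices.tolist())
weights, experts = route_with_silencing(torch.randn(4, num_experts), silenced)
assert not any(e in silenced for e in experts.flatten().tolist())
```

Because masking happens purely at routing time, a sketch like this needs no retraining and no weight edits, which is consistent with the training-free, architecture-agnostic framing of the attack; the remaining experts simply absorb the silenced experts' routing mass.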