LLM Safety From Within: Detecting Harmful Content with Internal Representations

April 20, 2026
Authors: Difan Jiao, Yilun Liu, Ye Yuan, Zhenwei Tang, Linfeng Du, Haolun Wu, Ashton Anderson
cs.AI

Abstract

Guard models are widely used to detect harmful content in user prompts and LLM responses. However, state-of-the-art guard models rely solely on terminal-layer representations and overlook the rich safety-relevant features distributed across internal layers. We present SIREN, a lightweight guard model that harnesses these internal features. By identifying safety neurons via linear probing and combining them through an adaptive layer-weighted strategy, SIREN builds a harmfulness detector from LLM internals without modifying the underlying model. Our comprehensive evaluation shows that SIREN substantially outperforms state-of-the-art open-source guard models across multiple benchmarks while using 250 times fewer trainable parameters. Moreover, SIREN exhibits superior generalization to unseen benchmarks, naturally enables real-time streaming detection, and significantly improves inference efficiency compared to generative guard models. Overall, our results highlight LLM internal states as a promising foundation for practical, high-performance harmfulness detection.