LASA: Language-Agnostic Semantic Alignment at the Semantic Bottleneck for LLM Safety
April 13, 2026
Authors: Junxiao Yang, Haoran Liu, Jinzhe Tu, Jiale Cheng, Zhexin Zhang, Shiyao Cui, Jiaqi Weng, Jialing Tao, Hui Xue, Hongning Wang, Han Qiu, Minlie Huang
cs.AI
Abstract
Large language models (LLMs) often demonstrate strong safety performance in high-resource languages, yet exhibit severe vulnerabilities when queried in low-resource languages. We attribute this gap to a mismatch between language-agnostic semantic understanding ability and language-dominant safety alignment biased toward high-resource languages. Consistent with this hypothesis, we empirically identify the semantic bottleneck in LLMs, an intermediate layer in which the geometry of model representations is governed primarily by shared semantic content rather than language identity. Building on this observation, we propose Language-Agnostic Semantic Alignment (LASA), which anchors safety alignment directly in semantic bottlenecks. Experiments show that LASA substantially improves safety across all languages: average attack success rate (ASR) drops from 24.7% to 2.8% on LLaMA-3.1-8B-Instruct and remains around 3-4% across Qwen2.5 and Qwen3 Instruct models (7B-32B). Together, our analysis and method offer a representation-level perspective on LLM safety, suggesting that safety alignment requires anchoring safety understanding not in surface text, but in the model's language-agnostic semantic space.
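To make the bottleneck-identification idea concrete, below is a minimal sketch of how one might probe for a semantic bottleneck layer, assuming the Hugging Face transformers and torch libraries. This is not the authors' implementation: the model name, the English-Spanish toy pairs, and the mean-pooling/cosine-similarity probe are all illustrative assumptions. The intuition follows the abstract: at a semantic bottleneck, translation pairs (same meaning, different language) should sit closer in representation space than same-language pairs with different meaning.

```python
# Hypothetical sketch: score each layer by how much its geometry is
# governed by shared semantics rather than language identity.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-3.1-8B-Instruct"  # any causal LM with hidden states works
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def layerwise_embedding(text: str) -> torch.Tensor:
    """Mean-pooled hidden state per layer: shape (num_layers + 1, hidden_dim)."""
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # out.hidden_states: tuple of (1, seq_len, hidden_dim), embeddings + each block
    return torch.stack([h.mean(dim=1).squeeze(0) for h in out.hidden_states])

# Toy data (illustrative, not from the paper): each entry is an
# English sentence, its Spanish translation, and an unrelated English
# sentence serving as the same-language / different-meaning baseline.
pairs = [
    ("How do I bake bread?", "¿Cómo horneo pan?", "The stock market fell today."),
    ("Describe the water cycle.", "Describe el ciclo del agua.", "Cats sleep a lot."),
]

num_layers = model.config.num_hidden_layers + 1
semantic_gap = torch.zeros(num_layers)
for en, es, distractor in pairs:
    e, s, d = map(layerwise_embedding, (en, es, distractor))
    cross_lingual = torch.cosine_similarity(e, s, dim=-1)   # same meaning
    same_language = torch.cosine_similarity(e, d, dim=-1)   # same language
    # A large positive gap means meaning dominates language at that layer.
    semantic_gap += (cross_lingual - same_language) / len(pairs)

print(f"Candidate semantic bottleneck layer: {int(semantic_gap.argmax())}")
```

Under this reading, the layer maximizing the gap is a natural candidate for where a method like LASA would anchor its safety objective, since supervision applied there operates on meaning rather than on surface language.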