Emergence of Linear Truth Encodings in Language Models
October 17, 2025
Authors: Shauli Ravfogel, Gilad Yehudai, Tal Linzen, Joan Bruna, Alberto Bietti
cs.AI
Abstract
Recent probing studies reveal that large language models exhibit linear
subspaces that separate true from false statements, yet the mechanism behind
their emergence is unclear. We introduce a transparent, one-layer transformer
toy model that reproduces such truth subspaces end-to-end and exposes one
concrete route by which they can arise. We study one simple setting in which
truth encoding can emerge: a data distribution where factual statements
co-occur with other factual statements (and vice-versa), encouraging the model
to learn this distinction in order to lower the LM loss on future tokens. We
corroborate this pattern with experiments in pretrained language models.
Finally, in the toy setting we observe a two-phase learning dynamic: networks
first memorize individual factual associations in a few steps, then -- over a
longer horizon -- learn to linearly separate true from false, which in turn
lowers language-modeling loss. Together, these results provide both a
mechanistic demonstration and an empirical motivation for how and why linear
truth representations can emerge in language models.
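To make the described data distribution concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of how sequences of co-occurring true or false statements might be generated. All names here (`make_statement`, `sample_sequence`, the subject/attribute vocabulary) are illustrative assumptions rather than details taken from the paper.

```python
import random

# Hypothetical toy "knowledge base": each subject has exactly one correct attribute.
# This mirrors the setting described in the abstract, where a sequence contains
# several statements that are either all true or all false.
SUBJECTS = [f"s{i}" for i in range(50)]
ATTRIBUTES = [f"a{j}" for j in range(50)]
TRUE_ATTRIBUTE = {s: random.choice(ATTRIBUTES) for s in SUBJECTS}


def make_statement(subject: str, truthful: bool) -> str:
    """Return 'subject is attribute', using the correct attribute iff truthful."""
    if truthful:
        attr = TRUE_ATTRIBUTE[subject]
    else:
        wrong = [a for a in ATTRIBUTES if a != TRUE_ATTRIBUTE[subject]]
        attr = random.choice(wrong)
    return f"{subject} is {attr}"


def sample_sequence(num_statements: int = 2) -> tuple[str, bool]:
    """Sample a sequence whose statements all share a single truth value.

    Because truth values are correlated across statements, predicting the
    continuation of the sequence (i.e., lowering LM loss on future tokens)
    is easier for a model that internally tracks whether the earlier
    statements were true or false.
    """
    truthful = random.random() < 0.5
    subjects = random.sample(SUBJECTS, num_statements)
    text = ". ".join(make_statement(s, truthful) for s in subjects) + "."
    return text, truthful


if __name__ == "__main__":
    for _ in range(3):
        seq, label = sample_sequence()
        print(f"[{'TRUE' if label else 'FALSE'}] {seq}")
```

Under this assumed setup, the truth value of the first statement is fully predictive of whether the second statement will use the correct attribute, which is the incentive the abstract identifies for a language model to develop a linearly readable truth representation.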