Sparse Reward Subsystem in Large Language Models
February 1, 2026
Authors: Guowei Xu, Mert Yuksekgonul, James Zou
cs.AI
Abstract
In this paper, we identify a sparse reward subsystem within the hidden states of Large Language Models (LLMs), drawing an analogy to the biological reward subsystem in the human brain. We demonstrate that this subsystem contains value neurons that represent the model's internal expectation of state value, and through intervention experiments, we establish the importance of these neurons for reasoning. Our experiments reveal that these value neurons are robust across diverse datasets, model scales, and architectures; furthermore, they exhibit significant transferability across different datasets and across models fine-tuned from the same base model. By examining cases where value predictions and actual rewards diverge, we identify dopamine neurons within the reward subsystem that encode reward prediction errors (RPEs). These neurons exhibit high activation when the reward is higher than expected and low activation when the reward is lower than expected.
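The sign convention behind the dopamine-neuron finding can be illustrated with the classical reward-prediction-error formulation, delta = r − V(s): the error is positive when the actual reward exceeds the value expectation and negative when it falls short. The sketch below is a minimal illustration of this idea only; the function name and the numeric values are our own assumptions, not the paper's code or data.

```python
def reward_prediction_error(expected_value: float, actual_reward: float) -> float:
    """Classical RPE: delta = r - V(s).

    Positive when the outcome beats the expectation (analogous to the
    paper's dopamine neurons activating above baseline), negative when
    the outcome falls short (activation below baseline).
    """
    return actual_reward - expected_value


# Hypothetical example: value neurons encode an expectation of 0.7.
positive_surprise = reward_prediction_error(0.7, 1.0)  # reasoning step succeeds
negative_surprise = reward_prediction_error(0.7, 0.0)  # reasoning step fails

print(positive_surprise > 0)  # True: reward higher than expected
print(negative_surprise < 0)  # True: reward lower than expected
```

Under this reading, the value neurons supply V(s) and the dopamine neurons report the residual delta, mirroring the division of labor in biological reward circuits.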