The Assistant Axis: Situating and Stabilizing the Default Persona of Language Models
January 15, 2026
Authors: Christina Lu, Jack Gallagher, Jonathan Michala, Kyle Fish, Jack Lindsey
cs.AI
Abstract
Large language models can represent a variety of personas but typically default to a helpful Assistant identity cultivated during post-training. We investigate the structure of the space of model personas by extracting activation directions corresponding to diverse character archetypes. Across several different models, we find that the leading component of this persona space is an "Assistant Axis," which captures the extent to which a model is operating in its default Assistant mode. Steering towards the Assistant direction reinforces helpful and harmless behavior; steering away increases the model's tendency to identify as other entities. Moreover, steering away with more extreme values often induces a mystical, theatrical speaking style. We find this axis is also present in pre-trained models, where it primarily promotes helpful human archetypes like consultants and coaches and inhibits spiritual ones. Measuring deviations along the Assistant Axis predicts "persona drift," a phenomenon where models slip into exhibiting harmful or bizarre behaviors that are uncharacteristic of their typical persona. We find that persona drift is often driven by conversations demanding meta-reflection on the model's processes or featuring emotionally vulnerable users. We show that restricting activations to a fixed region along the Assistant Axis can stabilize model behavior in these scenarios -- and also in the face of adversarial persona-based jailbreaks. Our results suggest that post-training steers models toward a particular region of persona space but only loosely tethers them to it, motivating work on training and steering strategies that more deeply anchor models to a coherent persona.
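To make the pipeline described above concrete, here is a minimal sketch (not the authors' code) of how an "Assistant Axis" could be extracted from per-persona activation directions, used to monitor drift, and used for steering or clamping. It uses synthetic placeholder activations in place of real residual-stream measurements; every name, shape, and threshold here is illustrative.

```python
import numpy as np

# --- Illustrative stand-in data (the paper works with real model activations) ---
# Suppose we have mean hidden-state activations at one layer, collected while
# the model role-plays each of `n_personas` character archetypes.
rng = np.random.default_rng(0)
d_model, n_personas = 512, 50
persona_acts = rng.normal(size=(n_personas, d_model))  # placeholder activations
baseline_act = persona_acts.mean(axis=0)               # center of "persona space"

# Persona directions: each archetype's offset from the shared mean.
persona_dirs = persona_acts - baseline_act

# The leading principal component of these directions is a candidate
# "Assistant Axis": the dominant direction along which personas vary.
# (The SVD sign is arbitrary; in practice one would orient the axis so
# that the default Assistant persona scores positive.)
_, _, vt = np.linalg.svd(persona_dirs, full_matrices=False)
assistant_axis = vt[0] / np.linalg.norm(vt[0])

def assistant_projection(act: np.ndarray) -> float:
    """Scalar coordinate of an activation along the Assistant Axis.
    Large deviations from the typical Assistant range would flag persona drift."""
    return float((act - baseline_act) @ assistant_axis)

def clamp_to_axis_band(act: np.ndarray, lo: float, hi: float) -> np.ndarray:
    """Restrict the Assistant-Axis coordinate to [lo, hi], leaving all
    orthogonal components untouched (one way to 'stabilize' behavior)."""
    coord = assistant_projection(act)
    clipped = float(np.clip(coord, lo, hi))
    return act + (clipped - coord) * assistant_axis

# Steering example: add a scaled copy of the axis to an activation
# (positive alpha pushes toward the Assistant direction, negative away).
alpha = 4.0
steered = persona_acts[0] + alpha * assistant_axis
print(assistant_projection(persona_acts[0]), assistant_projection(steered))
```

The design choice worth noting is that clamping only adjusts the component along the axis, so persona-irrelevant computation in the orthogonal subspace passes through unchanged; the band endpoints `lo` and `hi` would be calibrated from activations observed during ordinary Assistant-mode conversations.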