Context-Value-Action Architecture for Value-Driven Large Language Model Agents
April 7, 2026
Authors: TianZe Zhang, Sirui Sun, Yuhang Xie, Xin Zhang, Zhiqiang Wu, Guojie Song
cs.AI
Abstract
Large Language Models (LLMs) have shown promise in simulating human behavior, yet existing agents often exhibit behavioral rigidity, a flaw frequently masked by the self-referential bias of current "LLM-as-a-judge" evaluations. By evaluating against empirical ground truth, we reveal a counter-intuitive phenomenon: increasing the intensity of prompt-driven reasoning does not enhance fidelity but rather exacerbates value polarization, collapsing population diversity. To address this, we propose the Context-Value-Action (CVA) architecture, grounded in the Stimulus-Organism-Response (S-O-R) model and Schwartz's Theory of Basic Human Values. Unlike methods relying on self-verification, CVA decouples action generation from cognitive reasoning via a novel Value Verifier trained on authentic human data to explicitly model dynamic value activation. Experiments on CVABench, which comprises over 1.1 million real-world interaction traces, demonstrate that CVA significantly outperforms baselines. Our approach effectively mitigates polarization while offering superior behavioral fidelity and interpretability.
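The abstract names the CVA components without specifying an interface, so the following is a minimal Python sketch, under our own assumptions, of the decoupling it describes: an LLM proposes candidate actions from the context, a separately trained Value Verifier scores each candidate against an activated Schwartz value profile, and the best-scoring action is selected. Every name here (ValueProfile, ValueVerifier, cva_step, the callables) is a hypothetical illustration, not the paper's actual API.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    # Schwartz's ten basic values, used as the dimensions of a value profile.
    SCHWARTZ_VALUES = [
        "self-direction", "stimulation", "hedonism", "achievement", "power",
        "security", "conformity", "tradition", "benevolence", "universalism",
    ]

    @dataclass
    class ValueProfile:
        """Per-value activation weights (value name -> weight in [0, 1])."""
        weights: Dict[str, float]

    class ValueVerifier:
        """Hypothetical stand-in for the paper's verifier trained on human
        data. score() rates how consistent an action is with the activated
        profile; the scoring function is pluggable in this sketch."""
        def __init__(self, score_fn: Callable[[str, ValueProfile], float]):
            self.score_fn = score_fn

        def score(self, action: str, profile: ValueProfile) -> float:
            return self.score_fn(action, profile)

    def cva_step(
        context: str,
        activate_values: Callable[[str], ValueProfile],  # O: organism state
        propose_actions: Callable[[str], List[str]],     # S -> candidate responses
        verifier: ValueVerifier,
    ) -> str:
        """One Stimulus-Organism-Response step with verification decoupled
        from generation: the proposer never ranks its own candidates."""
        profile = activate_values(context)     # context -> dynamic value activation
        candidates = propose_actions(context)  # generation, blind to the profile
        return max(candidates, key=lambda a: verifier.score(a, profile))

Keeping the verifier as a separate component trained on human data is the point of the sketch: the proposer never judges its own output, which is precisely the self-verification loop the abstract argues against.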