ChatPaper.aiChatPaper

个性化陷阱:用户记忆如何改变大语言模型中的情感推理

The Personalization Trap: How User Memory Alters Emotional Reasoning in LLMs

October 10, 2025
作者: Xi Fang, Weijie Xu, Yuchong Zhang, Stephanie Eckman, Scott Nickleach, Chandan K. Reddy
cs.AI

摘要

当AI助手记住Sarah是一位身兼两份工作的单亲母亲时,它对其压力的理解是否会与面对一位富裕高管时有所不同?随着个性化AI系统日益融入长期用户记忆,理解这种记忆如何塑造情感推理变得至关重要。我们通过评估15个大型语言模型(LLMs)在人类验证的情感智力测试上的表现,探究了用户记忆如何影响LLMs的情感智能。研究发现,相同情境搭配不同用户档案时,会产生系统性的情感解读差异。在已验证的独立于用户的情感场景及多样化的用户档案中,多个高性能LLMs显现出系统性偏见,即优势群体档案获得了更准确的情感解读。此外,LLMs在情感理解与支持性建议任务中,跨人口统计因素表现出显著差异,这表明个性化机制可能将社会等级嵌入模型的情感推理之中。这些结果凸显了记忆增强型AI面临的一个关键挑战:旨在实现个性化的系统可能无意中强化了社会不平等。
English
When an AI assistant remembers that Sarah is a single mother working two jobs, does it interpret her stress differently than if she were a wealthy executive? As personalized AI systems increasingly incorporate long-term user memory, understanding how this memory shapes emotional reasoning is critical. We investigate how user memory affects emotional intelligence in large language models (LLMs) by evaluating 15 models on human validated emotional intelligence tests. We find that identical scenarios paired with different user profiles produce systematically divergent emotional interpretations. Across validated user independent emotional scenarios and diverse user profiles, systematic biases emerged in several high-performing LLMs where advantaged profiles received more accurate emotional interpretations. Moreover, LLMs demonstrate significant disparities across demographic factors in emotion understanding and supportive recommendations tasks, indicating that personalization mechanisms can embed social hierarchies into models emotional reasoning. These results highlight a key challenge for memory enhanced AI: systems designed for personalization may inadvertently reinforce social inequalities.
PDF64October 14, 2025