

Do LLMs Feel? Teaching Emotion Recognition with Prompts, Retrieval, and Curriculum Learning

November 10, 2025
Authors: Xinran Li, Xiujuan Xu, Jiaqi Qiao, Yu Liu
cs.AI

Abstract

Emotion Recognition in Conversation (ERC) is a crucial task for understanding human emotions and enabling natural human-computer interaction. Although Large Language Models (LLMs) have recently shown great potential in this field, their ability to capture the intrinsic connections between explicit and implicit emotions remains limited. We propose a novel ERC training framework, PRC-Emo, which integrates Prompt engineering, demonstration Retrieval, and Curriculum learning, with the goal of exploring whether LLMs can effectively perceive emotions in conversational contexts. Specifically, we design emotion-sensitive prompt templates based on both explicit and implicit emotional cues to better guide the model in understanding the speaker's psychological states. We construct the first dedicated demonstration retrieval repository for ERC, which includes training samples from widely used datasets, as well as high-quality dialogue examples generated by LLMs and manually verified. Moreover, we introduce a curriculum learning strategy into the LoRA fine-tuning process, incorporating weighted emotional shifts between same-speaker and different-speaker utterances to assign difficulty levels to dialogue samples, which are then organized in an easy-to-hard training sequence. Experimental results on two benchmark datasets, IEMOCAP and MELD, show that our method achieves new state-of-the-art (SOTA) performance, demonstrating the effectiveness and generalizability of our approach in improving LLM-based emotional understanding.
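As a rough illustration of the curriculum step described in the abstract, the following Python sketch shows how weighted emotion shifts between consecutive utterances might be turned into a per-dialogue difficulty score that orders training from easy to hard. The weights `w_same` and `w_diff`, the length normalization, and all function and class names are illustrative assumptions; the paper's actual scoring rule is not given here.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    emotion: str  # gold label from the training set, e.g. "joy", "anger"

def dialogue_difficulty(utterances, w_same=1.0, w_diff=0.5):
    """Score one dialogue by its weighted number of emotion shifts (assumed weights)."""
    score = 0.0
    for prev, curr in zip(utterances, utterances[1:]):
        if prev.emotion != curr.emotion:          # an emotional shift occurred
            if prev.speaker == curr.speaker:      # same speaker changed emotion
                score += w_same
            else:                                 # shift across speakers
                score += w_diff
    # Normalize by the number of transitions so longer dialogues are comparable.
    return score / max(len(utterances) - 1, 1)

def curriculum_order(dialogues):
    """Arrange dialogues from easy (few weighted shifts) to hard (many)."""
    return sorted(dialogues, key=dialogue_difficulty)

# Example: the dialogue with more weighted shifts is scheduled later in training.
easy = [Utterance("A", "neutral"), Utterance("B", "neutral"), Utterance("A", "joy")]
hard = [Utterance("A", "joy"), Utterance("A", "anger"), Utterance("B", "sadness")]
for d in curriculum_order([hard, easy]):
    print(round(dialogue_difficulty(d), 2))  # prints 0.25 then 0.75
```

Under these assumed weights, a dialogue whose emotions stay stable scores low and is seen early, while one with frequent same-speaker emotion changes scores high and is deferred, matching the easy-to-hard schedule the abstract describes.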