DeMeVa at LeWiDi-2025: Modeling Perspectives with In-Context Learning and Label Distribution Learning
September 11, 2025
Authors: Daniil Ignatev, Nan Li, Hugh Mee Wong, Anh Dang, Shane Kaszefski Yaschuk
cs.AI
Abstract
This system paper presents the DeMeVa team's approaches to the third edition
of the Learning with Disagreements shared task (LeWiDi 2025; Leonardelli et
al., 2025). We explore two directions: in-context learning (ICL) with large
language models, where we compare example sampling strategies; and label
distribution learning (LDL) methods with RoBERTa (Liu et al., 2019b), where we
evaluate several fine-tuning methods. Our contributions are twofold: (1) we
show that ICL can effectively predict annotator-specific annotations
(perspectivist annotations), and that aggregating these predictions into soft
labels yields competitive performance; and (2) we argue that LDL methods are
promising for soft label predictions and merit further exploration by the
perspectivist community.
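
The aggregation step mentioned in contribution (1) can be pictured with a minimal sketch: per-annotator predictions (e.g., obtained from ICL prompts conditioned on each annotator's past labels) are counted into a relative-frequency distribution over the label set. The function name, labels, and frequency-based averaging below are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): turn
# annotator-specific hard predictions into a soft label distribution.
from collections import Counter

def aggregate_soft_label(annotator_predictions, label_set):
    """Map a list of per-annotator labels for one item to a
    probability distribution over label_set via relative frequencies."""
    counts = Counter(annotator_predictions)
    total = sum(counts.values())
    return {label: counts.get(label, 0) / total for label in label_set}

# Example: five hypothetical annotator-specific predictions for one item.
predictions = ["offensive", "not_offensive", "offensive", "offensive", "not_offensive"]
print(aggregate_soft_label(predictions, ["offensive", "not_offensive"]))
# -> {'offensive': 0.6, 'not_offensive': 0.4}
```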