DeMeVa at LeWiDi-2025: Modeling Perspectives with In-Context Learning and Label Distribution Learning
September 11, 2025
Authors: Daniil Ignatev, Nan Li, Hugh Mee Wong, Anh Dang, Shane Kaszefski Yaschuk
cs.AI
Abstract
This system paper presents the DeMeVa team's approaches to the third edition
of the Learning with Disagreements shared task (LeWiDi 2025; Leonardelli et
al., 2025). We explore two directions: in-context learning (ICL) with large
language models, where we compare example sampling strategies; and label
distribution learning (LDL) methods with RoBERTa (Liu et al., 2019b), where we
evaluate several fine-tuning methods. Our contributions are twofold: (1) we
show that ICL can effectively predict annotator-specific annotations
(perspectivist annotations), and that aggregating these predictions into soft
labels yields competitive performance; and (2) we argue that LDL methods are
promising for soft label predictions and merit further exploration by the
perspectivist community.
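To make the two ideas in the abstract concrete, below is a minimal sketch (not the authors' code) of (1) aggregating annotator-specific predictions, such as those produced per annotator by ICL, into a soft label, and (2) a label distribution learning objective that fits soft labels directly. The label set, the example annotator votes, and the choice of a KL-divergence loss are illustrative assumptions, not details taken from the paper.

```python
# Sketch only: per-annotator hard predictions -> soft label, plus an LDL-style loss.
from collections import Counter

import torch
import torch.nn.functional as F


def aggregate_to_soft_label(annotator_preds, labels):
    """Turn per-annotator hard predictions into a soft label by normalizing
    vote counts over the label set."""
    counts = Counter(annotator_preds)
    total = sum(counts.values())
    return torch.tensor([counts.get(lab, 0) / total for lab in labels])


def ldl_loss(logits, soft_targets):
    """One common LDL training signal: KL divergence between the model's
    predicted distribution and the human soft-label distribution."""
    log_probs = F.log_softmax(logits, dim=-1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean")


if __name__ == "__main__":
    labels = ["not_sarcastic", "sarcastic"]  # hypothetical binary task
    annotator_preds = ["sarcastic", "sarcastic", "not_sarcastic", "sarcastic"]

    soft = aggregate_to_soft_label(annotator_preds, labels)
    print("soft label:", soft)  # tensor([0.2500, 0.7500])

    # Toy classifier logits for one example (e.g., from a RoBERTa classification head).
    logits = torch.tensor([[0.2, 1.1]])
    print("LDL loss:", ldl_loss(logits, soft.unsqueeze(0)).item())
```

In this sketch the soft label is a simple normalized vote count; the loss then trains a classifier to match that distribution rather than a single aggregated hard label, which is the basic premise behind LDL-style fine-tuning.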