LLM in the Loop: Creating the PARADEHATE Dataset for Hate Speech Detoxification
June 2, 2025
Authors: Shuzhou Yuan, Ercong Nie, Lukas Kouba, Ashish Yashwanth Kangen, Helmut Schmid, Hinrich Schütze, Michael Färber
cs.AI
Abstract
Detoxification, the task of rewriting harmful language into non-toxic text,
has become increasingly important amid the growing prevalence of toxic content
online. However, high-quality parallel datasets for detoxification, especially
for hate speech, remain scarce due to the cost and sensitivity of human
annotation. In this paper, we propose a novel LLM-in-the-loop pipeline
leveraging GPT-4o-mini for automated detoxification. We first replicate the
ParaDetox pipeline by replacing human annotators with an LLM and show that the
LLM performs comparably to human annotation. Building on this, we construct
PARADEHATE, a large-scale parallel dataset specifically for hate speech
detoxification. We release PARADEHATE as a benchmark of over 8K hate/non-hate
text pairs and evaluate a wide range of baseline methods. Experimental results
show that models such as BART, fine-tuned on PARADEHATE, achieve better
performance in style accuracy, content preservation, and fluency, demonstrating
the effectiveness of LLM-generated detoxified text as a scalable
alternative to human annotation.
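The core LLM-in-the-loop step described in the abstract — prompting an LLM such as GPT-4o-mini to rewrite a toxic input, then checking that the rewrite preserves the original content — can be sketched roughly as below. This is an illustrative assumption, not the paper's actual implementation: the prompt wording, the `build_detox_prompt` helper, and the `content_overlap` metric are placeholders (the paper evaluates with proper style-accuracy, content-preservation, and fluency metrics).

```python
def build_detox_prompt(toxic_text: str) -> str:
    """Build a detoxification instruction for a chat LLM.

    The paper's exact prompt is not reproduced here; this is a plausible
    stand-in for the kind of instruction sent to GPT-4o-mini.
    """
    return (
        "Rewrite the following message so that it keeps the same meaning "
        "but contains no toxic or hateful language. Reply with the rewrite "
        "only.\n\n"
        f"Message: {toxic_text}"
    )


def content_overlap(source: str, rewrite: str) -> float:
    """Crude content-preservation proxy: Jaccard overlap of lowercase tokens.

    Real pipelines use stronger signals (e.g., embedding similarity); this
    only sketches the idea of filtering rewrites that drift too far from
    the source content.
    """
    a, b = set(source.lower().split()), set(rewrite.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0
```

In a full pipeline, `build_detox_prompt` would be sent to the chat API, and candidate rewrites scoring too low on content preservation (or still flagged as toxic by a classifier) would be discarded or regenerated, mirroring the quality-control role human annotators play in ParaDetox.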