LLM in the Loop: Creating the PARADEHATE Dataset for Hate Speech Detoxification
June 2, 2025
Authors: Shuzhou Yuan, Ercong Nie, Lukas Kouba, Ashish Yashwanth Kangen, Helmut Schmid, Hinrich Schütze, Michael Färber
cs.AI
Abstract
Detoxification, the task of rewriting harmful language into non-toxic text,
has become increasingly important amid the growing prevalence of toxic content
online. However, high-quality parallel datasets for detoxification, especially
for hate speech, remain scarce due to the cost and sensitivity of human
annotation. In this paper, we propose a novel LLM-in-the-loop pipeline
leveraging GPT-4o-mini for automated detoxification. We first replicate the
ParaDetox pipeline by replacing human annotators with an LLM and show that the
LLM performs comparably to human annotation. Building on this, we construct
PARADEHATE, a large-scale parallel dataset specifically for hate speech
detoxification. We release PARADEHATE as a benchmark of over 8K hate/non-hate
text pairs and evaluate a wide range of baseline methods. Experimental results
show that models such as BART, fine-tuned on PARADEHATE, achieve better
performance in style accuracy, content preservation, and fluency, demonstrating
the effectiveness of LLM-generated detoxification text as a scalable
alternative to human annotation.