

<think> So let's replace this phrase with insult... </think> Lessons learned from generation of toxic texts with LLMs

September 10, 2025
Authors: Sergey Pletenev, Daniil Moskovskiy, Alexander Panchenko
cs.AI

Abstract

Modern Large Language Models (LLMs) are excellent at generating synthetic data. However, their performance in sensitive domains such as text detoxification has not received proper attention from the scientific community. This paper explores the possibility of using LLM-generated synthetic toxic data as an alternative to human-generated data for training detoxification models. Using Llama 3 and Qwen activation-patched models, we generated synthetic toxic counterparts for neutral texts from the ParaDetox and SST-2 datasets. Our experiments show that models fine-tuned on synthetic data consistently perform worse than those trained on human data, with a drop in performance of up to 30% in joint metrics. The root cause is identified as a critical lexical diversity gap: LLMs generate toxic content using a small, repetitive vocabulary of insults that fails to capture the nuances and variety of human toxicity. These findings highlight the limitations of current LLMs in this domain and emphasize the continued importance of diverse, human-annotated data for building robust detoxification systems.
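The abstract does not spell out how the "activation-patched" models produce toxic rewrites. Below is a minimal, hypothetical sketch of one common activation-steering approach: adding a precomputed "toxicity direction" to the hidden states of a middle decoder layer via a forward hook. The model checkpoint, layer index, steering strength, and the random placeholder direction are all assumptions for illustration and not the authors' implementation.

```python
# Minimal activation-steering sketch (illustrative only, not the paper's code).
# Assumptions: a Llama-style decoder, an intervention at one middle layer, and a
# placeholder "toxicity direction"; in practice that direction would be estimated,
# e.g. from the mean difference of hidden states on toxic vs. neutral prompts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint
LAYER_IDX = 15                                      # assumed intervention layer
ALPHA = 4.0                                         # assumed steering strength

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)

# Placeholder direction in the residual stream (random here, estimated in practice).
toxicity_direction = torch.randn(model.config.hidden_size)

def steer(module, inputs, output):
    # Llama decoder layers return a tuple whose first element is the hidden states;
    # returning a new tuple from a forward hook replaces the layer's output.
    hidden = output[0] + ALPHA * toxicity_direction.to(output[0].device, output[0].dtype)
    return (hidden,) + output[1:]

handle = model.model.layers[LAYER_IDX].register_forward_hook(steer)

prompt = "Paraphrase the sentence: The service at this restaurant was rather slow."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the unpatched model
```

Whether such a patch actually elicits toxic paraphrases depends entirely on how the direction is estimated; the sketch only shows where a patched activation would enter generation before the resulting neutral/toxic pairs are used to fine-tune a detoxification model.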