SweEval:大语言模型真的会说脏话吗?面向企业应用的安全基准测试
SweEval: Do LLMs Really Swear? A Safety Benchmark for Testing Limits for Enterprise Use
May 22, 2025
作者: Hitesh Laxmichand Patel, Amit Agarwal, Arion Das, Bhargava Kumar, Srikant Panda, Priyaranjan Pattnayak, Taki Hasan Rafi, Tejaswini Kumar, Dong-Kyu Chae
cs.AI
摘要
企业客户正日益采用大型语言模型(LLMs)来完成关键沟通任务,如撰写电子邮件、构思销售提案以及编写日常消息。要在不同地区部署此类模型,要求它们能够理解多元的文化与语言背景,并生成安全且得体的回应。对于企业应用而言,有效识别并处理不安全或冒犯性语言,以降低声誉风险、维护信任并确保合规性,显得尤为重要。为此,我们推出了SweEval基准测试,该测试通过模拟现实场景,涵盖语气(积极或消极)和语境(正式或非正式)的变化,明确指示模型在完成任务时包含特定粗俗词汇。此基准旨在评估LLMs在面对此类不当指令时是遵循还是抵制,并检验其与伦理框架、文化细微差别及语言理解能力的契合度。为了推动构建符合伦理的AI系统研究,适用于企业及其他领域,我们公开了数据集与代码:https://github.com/amitbcp/multilingual_profanity。
English
Enterprise customers are increasingly adopting Large Language Models (LLMs)
for critical communication tasks, such as drafting emails, crafting sales
pitches, and composing casual messages. Deploying such models across different
regions requires them to understand diverse cultural and linguistic contexts
and generate safe and respectful responses. For enterprise applications, it is
crucial to mitigate reputational risks, maintain trust, and ensure compliance
by effectively identifying and handling unsafe or offensive language. To
address this, we introduce SweEval, a benchmark simulating real-world scenarios
with variations in tone (positive or negative) and context (formal or
informal). The prompts explicitly instruct the model to include specific swear
words while completing the task. This benchmark evaluates whether LLMs comply
with or resist such inappropriate instructions and assesses their alignment
with ethical frameworks, cultural nuances, and language comprehension
capabilities. In order to advance research in building ethically aligned AI
systems for enterprise use and beyond, we release the dataset and code:
https://github.com/amitbcp/multilingual_profanity.