ChatPaper.aiChatPaper

WalledEval:針對大型語言模型的全面安全評估工具包

WalledEval: A Comprehensive Safety Evaluation Toolkit for Large Language Models

August 7, 2024
作者: Prannaya Gupta, Le Qi Yau, Hao Han Low, I-Shiang Lee, Hugo Maximus Lim, Yu Xin Teoh, Jia Hng Koh, Dar Win Liew, Rishabh Bhardwaj, Rajat Bhardwaj, Soujanya Poria
cs.AI

摘要

WalledEval是一個全面的人工智慧安全測試工具包,旨在評估大型語言模型(LLMs)。它支持各種模型,包括開放權重和基於API的模型,並提供超過35個安全基準,涵蓋多語言安全、誇大安全和提示注入等領域。該框架支持LLM和評審基準測試,並整合了自定義變異器,以測試針對各種文本風格變異(如未來時態和改寫)的安全性。此外,WalledEval還引入了WalledGuard,一個新的、小型且高效的內容審查工具,以及SGXSTest,用於評估文化背景下誇大安全性的基準。我們將WalledEval公開提供,網址為https://github.com/walledai/walledevalA。
English
WalledEval is a comprehensive AI safety testing toolkit designed to evaluate large language models (LLMs). It accommodates a diverse range of models, including both open-weight and API-based ones, and features over 35 safety benchmarks covering areas such as multilingual safety, exaggerated safety, and prompt injections. The framework supports both LLM and judge benchmarking, and incorporates custom mutators to test safety against various text-style mutations such as future tense and paraphrasing. Additionally, WalledEval introduces WalledGuard, a new, small and performant content moderation tool, and SGXSTest, a benchmark for assessing exaggerated safety in cultural contexts. We make WalledEval publicly available at https://github.com/walledai/walledevalA.

Summary

AI-Generated Summary

PDF183November 28, 2024