ChatPaper.aiChatPaper

MinorBench:一個手動構建的針對兒童內容風險的基準測試

MinorBench: A hand-built benchmark for content-based risks for children

March 13, 2025
作者: Shaun Khoo, Gabriel Chua, Rachel Shong
cs.AI

摘要

大型語言模型(LLMs)正迅速進入兒童的生活——通過家長的主動採用、學校教育以及同儕網絡——然而,當前的AI倫理與安全研究並未充分解決針對未成年人的特定內容相關風險。本文中,我們通過一個基於LLM的聊天機器人在中學環境中的實際案例研究,揭示了學生們如何使用乃至有時誤用該系統,從而凸顯這些不足。基於這些發現,我們提出了一個新的未成年人內容風險分類法,並介紹了MinorBench,這是一個開源基準,旨在評估LLMs在拒絕來自兒童的不安全或不恰當查詢方面的能力。我們在不同系統提示下評估了六個知名LLMs,展示了它們在兒童安全合規性上的顯著差異。我們的研究結果為構建更為堅固、以兒童為中心的安全機制提供了實用步驟,並強調了定制AI系統以保護年輕用戶的緊迫性。
English
Large Language Models (LLMs) are rapidly entering children's lives - through parent-driven adoption, schools, and peer networks - yet current AI ethics and safety research do not adequately address content-related risks specific to minors. In this paper, we highlight these gaps with a real-world case study of an LLM-based chatbot deployed in a middle school setting, revealing how students used and sometimes misused the system. Building on these findings, we propose a new taxonomy of content-based risks for minors and introduce MinorBench, an open-source benchmark designed to evaluate LLMs on their ability to refuse unsafe or inappropriate queries from children. We evaluate six prominent LLMs under different system prompts, demonstrating substantial variability in their child-safety compliance. Our results inform practical steps for more robust, child-focused safety mechanisms and underscore the urgency of tailoring AI systems to safeguard young users.

Summary

AI-Generated Summary

PDF43March 14, 2025