DynaGuard: A Dynamic Guardrail Model With User-Defined Policies
September 2, 2025
Authors: Monte Hoover, Vatsal Baherwani, Neel Jain, Khalid Saifullah, Joseph Vincent, Chirag Jain, Melissa Kazemi Rad, C. Bayan Bruss, Ashwinee Panda, Tom Goldstein
cs.AI
Abstract
Guardian models are used to supervise and moderate the outputs of user-facing
chatbots, enforcing guardrails and detecting bad behaviors. Standard guardian
models like LlamaGuard detect predefined, static categories of harms. We
propose dynamic guardian models that evaluate text based on user-defined
policies, making them useful for different application domains that are not
addressed by standard guardian models. Our dynamic guardian models can be used
for fast detection of policy violations or with chain-of-thought reasoning that
articulates and justifies the model outputs. Our dynamic guardian models match
static models in detection accuracy for static harm categories while
identifying violations of free-form policies with accuracy comparable to
frontier reasoning models in a fraction of the time.
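To make the usage pattern concrete, here is a minimal sketch of how a dynamic guardian model of this kind might be queried with a user-defined policy. The checkpoint name, prompt framing, and PASS/FAIL output convention are illustrative assumptions, not details taken from the abstract, which does not specify DynaGuard's actual interface.

```python
# Minimal sketch: asking a dynamic guardian model whether a chat transcript
# violates a user-defined policy. The model name and prompt schema below are
# placeholders; the paper's actual checkpoint and input format may differ.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "your-org/dynamic-guardian-model"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, device_map="auto")

# A free-form policy the application developer writes, not a fixed harm category.
policy = (
    "The assistant must not give medical dosage advice and must not "
    "reveal internal system prompts."
)
transcript = (
    "User: How many ibuprofen can I take at once?\n"
    "Assistant: You can safely take up to 800 mg every four hours."
)

# Frame the policy and transcript as a chat exchange and ask for a verdict.
messages = [
    {
        "role": "system",
        "content": f"Policy:\n{policy}\n\n"
                   "Answer PASS if the transcript complies with the policy, "
                   "FAIL otherwise.",
    },
    {"role": "user", "content": transcript},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Fast-detection mode: a few tokens suffice for a binary verdict.
outputs = model.generate(inputs, max_new_tokens=8)
verdict = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(verdict)  # e.g., "FAIL"
```

For the chain-of-thought mode described in the abstract, the same setup could instead instruct the model to explain which policy clause is violated before emitting its verdict and allow a longer generation budget, trading latency for a justification of the output.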