

OMNIGUARD: An Efficient Approach for AI Safety Moderation Across Modalities

May 29, 2025
Authors: Sahil Verma, Keegan Hines, Jeff Bilmes, Charlotte Siska, Luke Zettlemoyer, Hila Gonen, Chandan Singh
cs.AI

Abstract

The emerging capabilities of large language models (LLMs) have sparked concerns about their immediate potential for harmful misuse. The core approach to mitigating these concerns is the detection of harmful queries to the model. Current detection approaches are fallible and particularly susceptible to attacks that exploit mismatched generalization of model capabilities (e.g., prompts in low-resource languages or prompts provided in non-text modalities such as image and audio). To tackle this challenge, we propose OMNIGUARD, an approach for detecting harmful prompts across languages and modalities. Our approach (i) identifies internal representations of an LLM/MLLM that are aligned across languages or modalities and then (ii) uses them to build a language-agnostic or modality-agnostic classifier for detecting harmful prompts. OMNIGUARD improves harmful prompt classification accuracy by 11.57% over the strongest baseline in a multilingual setting and by 20.44% for image-based prompts, and sets a new SOTA for audio-based prompts. By repurposing embeddings computed during generation, OMNIGUARD is also very efficient (approximately 120 times faster than the next fastest baseline). Code and data are available at: https://github.com/vsahil/OmniGuard.
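The two-step recipe in the abstract — take internal representations that are aligned across languages/modalities, then fit a lightweight classifier on them — can be illustrated with a minimal sketch. This is not the authors' code: the embeddings here are synthetic stand-ins for mid-layer hidden states (in practice they would come from the LLM/MLLM forward pass already run for generation, which is where the speedup comes from), and the probe is a plain logistic regression trained by gradient descent.

```python
# Minimal sketch of the OMNIGUARD idea (illustration only, assumptions noted):
# (i) assume mid-layer hidden states ("internal representations") for prompts
#     are already extracted, and (ii) train one linear probe on those
#     embeddings to flag harmful prompts, independent of language/modality.
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # hidden-state dimensionality (hypothetical)

# Synthetic stand-ins for aligned representations: harmful prompts cluster
# around one direction regardless of "language", benign ones around another.
harmful_dir = rng.normal(size=DIM)
benign_dir = rng.normal(size=DIM)
X = np.vstack([
    rng.normal(size=(200, DIM)) * 0.5 + harmful_dir,
    rng.normal(size=(200, DIM)) * 0.5 + benign_dir,
])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Train a logistic-regression probe with plain gradient descent.
w, b, lr = np.zeros(DIM), 0.0, 0.1
for _ in range(500):
    z = np.clip(X @ w + b, -30.0, 30.0)   # clip to avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

def is_harmful(embedding: np.ndarray) -> bool:
    """Flag a prompt as harmful from its (precomputed) internal representation."""
    return float(embedding @ w + b) > 0.0

acc = np.mean(((X @ w + b) > 0.0) == y)
print(f"probe training accuracy: {acc:.2f}")
```

Because the probe reads embeddings the model computes anyway during generation, classification adds only one inner product per prompt, which is consistent with the large speedup the abstract reports over baselines that run a separate moderation model.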

