SafeGRPO: Self-Rewarded Multimodal Safety Alignment via Rule-Governed Policy Optimization

November 17, 2025
Authors: Xuankun Rong, Wenke Huang, Tingfeng Wang, Daiguo Zhou, Bo Du, Mang Ye
cs.AI

Abstract

Multimodal large language models (MLLMs) have demonstrated impressive reasoning and instruction-following capabilities, yet their expanded modality space introduces new compositional safety risks that emerge from complex text-image interactions. Such cross-modal couplings can produce unsafe semantics even when each individual input is benign, exposing the fragile safety awareness of current MLLMs. While recent works enhance safety by guiding models to reason about potential risks, unregulated reasoning traces may compromise alignment; and although Group Relative Policy Optimization (GRPO) offers self-rewarded refinement without human supervision, it lacks verifiable signals for reasoning safety. To address this, we propose SafeGRPO, a self-rewarded multimodal safety alignment framework that integrates rule-governed reward construction into GRPO, enabling interpretable and verifiable optimization of reasoning safety. Built on the constructed SafeTag-VL-3K dataset, which provides explicit visual, textual, and combined safety tags, SafeGRPO performs step-guided safety thinking to enforce structured reasoning and behavior alignment, substantially improving multimodal safety awareness, compositional robustness, and reasoning stability across diverse benchmarks without sacrificing general capabilities.
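Since only the abstract is available on this page, the sketch below is purely illustrative of the general idea described above: computing a verifiable, rule-based reward from explicit per-modality safety tags and the model's own structured verdict, then normalizing rewards group-relatively as in GRPO. All identifiers (SafetyTags, rule_reward, group_advantages) and the specific rule weights are hypothetical and are not taken from the paper.

```python
# Hypothetical sketch of a rule-governed safety reward for GRPO-style training.
# None of these names or weights come from the SafeGRPO paper; they only
# illustrate the idea of verifiable, rule-based reward construction.
from dataclasses import dataclass


@dataclass
class SafetyTags:
    image_safe: bool      # explicit visual safety tag (as in SafeTag-VL-3K)
    text_safe: bool       # explicit textual safety tag
    combined_safe: bool   # tag for the image-text combination


def rule_reward(tags: SafetyTags, predicted_safe: bool, refused: bool) -> float:
    """Assign a deterministic, verifiable reward from rules over safety tags.

    predicted_safe: the safety verdict extracted from the model's structured
                    reasoning trace.
    refused:        whether the final answer refuses the request.
    """
    reward = 0.0
    # Reward a correct safety verdict in the reasoning step.
    reward += 0.5 if predicted_safe == tags.combined_safe else -0.5
    # Reward behavior consistent with the ground-truth combined tag:
    # answer benign compositions, refuse unsafe ones.
    if tags.combined_safe:
        reward += 0.5 if not refused else -0.5
    else:
        reward += 0.5 if refused else -0.5
    return reward


def group_advantages(rewards: list[float]) -> list[float]:
    """Group-relative normalization over sampled responses to one prompt."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]
```

The property this sketch tries to capture is that the reward is derived from deterministic rules over the dataset's explicit tags and the model's own structured output, rather than from a learned reward model or human feedback, which is what makes the optimization signal verifiable.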