Doc-PP: Document Policy Preservation Benchmark for Large Vision-Language Models

January 7, 2026
Authors: Haeun Jang, Hwan Chang, Hwanhee Lee
cs.AI

Abstract

The deployment of Large Vision-Language Models (LVLMs) for real-world document question answering is often constrained by dynamic, user-defined policies that dictate information disclosure based on context. While ensuring adherence to these explicit constraints is critical, existing safety research primarily focuses on implicit social norms or text-only settings, overlooking the complexities of multimodal documents. In this paper, we introduce Doc-PP (Document Policy Preservation Benchmark), a novel benchmark constructed from real-world reports that requires reasoning across heterogeneous visual and textual elements under strict non-disclosure policies. Our evaluation highlights a systemic Reasoning-Induced Safety Gap: models frequently leak sensitive information when answers must be inferred through complex synthesis or aggregation across modalities, effectively circumventing existing safety constraints. Furthermore, we identify that providing extracted text improves perception but inadvertently facilitates leakage. To address these vulnerabilities, we propose DVA (Decompose-Verify-Aggregation), a structured inference framework that decouples reasoning from policy verification. Experimental results demonstrate that DVA significantly outperforms standard prompting defenses, offering a robust baseline for policy-compliant document understanding.
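
The abstract names DVA's three stages but does not specify an interface. Below is a minimal, hypothetical sketch of how a decompose-verify-aggregate pipeline might be wired around an LVLM call; the function names (`call_lvlm`, `decompose`, `verify`, `aggregate`), the policy format, and the keyword-based policy check are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

# Illustrative sketch of a Decompose-Verify-Aggregation (DVA) style
# pipeline. The paper's actual prompts, interfaces, and policy format
# are not given in the abstract; everything below is assumed.

@dataclass
class Policy:
    """A user-defined non-disclosure policy: facts that must not be revealed."""
    forbidden_terms: list[str]

def call_lvlm(prompt: str, document: bytes | None = None) -> str:
    """Placeholder for an LVLM API call (a vision-language model given
    the document image plus a text prompt)."""
    raise NotImplementedError("wire up your model here")

def decompose(question: str) -> list[str]:
    """Stage 1: split a complex question into atomic sub-questions, so
    each intermediate answer can be policy-checked before any
    cross-modal aggregation happens."""
    plan = call_lvlm(f"Break this question into atomic sub-questions:\n{question}")
    return [line.strip("- ") for line in plan.splitlines() if line.strip()]

def verify(answer: str, policy: Policy) -> bool:
    """Stage 2: policy verification, decoupled from reasoning. A naive
    keyword check stands in here for a model-based policy judge."""
    return not any(term.lower() in answer.lower() for term in policy.forbidden_terms)

def aggregate(question: str, safe_answers: list[str]) -> str:
    """Stage 3: compose the final answer only from sub-answers that
    passed verification."""
    if not safe_answers:
        return "I cannot answer this without disclosing restricted information."
    context = "\n".join(safe_answers)
    return call_lvlm(f"Using only these verified facts:\n{context}\nAnswer: {question}")

def dva_answer(question: str, document: bytes, policy: Policy) -> str:
    sub_questions = decompose(question)
    sub_answers = [call_lvlm(q, document) for q in sub_questions]
    safe = [a for a in sub_answers if verify(a, policy)]
    return aggregate(question, safe)
```

The property this sketch tries to mirror is the decoupling the abstract describes: verification runs on each intermediate answer before aggregation, so facts filtered at the sub-question level cannot be reassembled into a leaking final response.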