Contextual Integrity in LLMs via Reasoning and Reinforcement Learning

May 29, 2025
Authors: Guangchen Lan, Huseyin A. Inan, Sahar Abdelnabi, Janardhan Kulkarni, Lukas Wutschitz, Reza Shokri, Christopher G. Brinton, Robert Sim
cs.AI

Abstract

As the era of autonomous agents making decisions on behalf of users unfolds, ensuring contextual integrity (CI) -- what information is appropriate to share while carrying out a certain task -- becomes a central question for the field. We posit that CI demands a form of reasoning where the agent needs to reason about the context in which it is operating. To test this, we first prompt LLMs to reason explicitly about CI when deciding what information to disclose. We then extend this approach by developing a reinforcement learning (RL) framework that further instills in models the reasoning necessary to achieve CI. Using a synthetic, automatically created dataset of only ~700 examples but with diverse contexts and information disclosure norms, we show that our method substantially reduces inappropriate information disclosure while maintaining task performance across multiple model sizes and families. Importantly, the improvements transfer from this synthetic dataset to established CI benchmarks such as PrivacyLens, which has human annotations and evaluates the privacy leakage of AI assistants in actions and tool calls.
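A minimal sketch of the two ideas described in the abstract: prompting an LLM to reason explicitly about CI before disclosing information, and a scalar reward an RL loop could use to penalize inappropriate disclosure while preserving task success. The prompt wording, the `query_llm` callable, and the reward weights are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only -- not the authors' code.
# (1) Prompt an LLM to reason about contextual integrity (CI) norms before
#     deciding which user information to disclose.
# (2) A simple reward shaping an RL framework could use: reward task success,
#     penalize each field disclosed outside the appropriate set.

CI_PROMPT = """You are an assistant acting on the user's behalf.
Task: {task}
Recipient: {recipient}
Available user information: {user_info}

First, reason step by step about which pieces of information are appropriate
to share with this recipient for this task, and which are not.
Then output only the fields you will disclose.
"""


def disclosure_decision(task, recipient, user_info, query_llm):
    """Ask the model to reason about CI norms, then return its disclosure decision.
    `query_llm` is any text-completion callable (an assumption, not a paper API)."""
    prompt = CI_PROMPT.format(task=task, recipient=recipient, user_info=user_info)
    return query_llm(prompt)


def ci_reward(task_completed: bool, disclosed: set, appropriate: set) -> float:
    """Scalar reward for an RL step: +1 for completing the task, minus a penalty
    (0.5 per field, an assumed weight) for every inappropriately disclosed field."""
    leaked = disclosed - appropriate
    return 1.0 * task_completed - 0.5 * len(leaked)
```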