
Operationalizing Contextual Integrity in Privacy-Conscious Assistants

August 5, 2024
作者: Sahra Ghalebikesabi, Eugene Bagdasaryan, Ren Yi, Itay Yona, Ilia Shumailov, Aneesh Pappu, Chongyang Shi, Laura Weidinger, Robert Stanforth, Leonard Berrada, Pushmeet Kohli, Po-Sen Huang, Borja Balle
cs.AI

Abstract

Advanced AI assistants combine frontier LLMs and tool access to autonomously perform complex tasks on behalf of users. While the helpfulness of such assistants can increase dramatically with access to user information, including emails and documents, this raises privacy concerns about assistants sharing inappropriate information with third parties without user supervision. To steer information-sharing assistants to behave in accordance with privacy expectations, we propose to operationalize contextual integrity (CI), a framework that equates privacy with the appropriate flow of information in a given context. In particular, we design and evaluate a number of strategies to steer assistants' information-sharing actions to be CI-compliant. Our evaluation is based on a novel form-filling benchmark composed of synthetic data and human annotations, and it reveals that prompting frontier LLMs to perform CI-based reasoning yields strong results.
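The abstract's core idea, prompting an LLM to reason over the contextual-integrity parameters of a candidate information flow (sender, recipient, subject, information type, transmission principle) before sharing data, can be sketched as a prompt-construction step. This is a minimal illustration, not the paper's actual prompt; all names, field values, and the prompt wording are hypothetical assumptions.

```python
from dataclasses import dataclass


@dataclass
class InformationFlow:
    """Contextual-integrity parameters of a candidate information flow.

    The five CI parameters below follow Nissenbaum's framework; the
    concrete strings are illustrative only.
    """
    sender: str                  # who transmits the information
    recipient: str               # who receives it
    subject: str                 # whom the information is about
    information_type: str        # what kind of information flows
    transmission_principle: str  # the condition governing the flow


def build_ci_prompt(flow: InformationFlow, field: str, value: str) -> str:
    """Compose a prompt asking the model to judge CI compliance
    before filling a form field with user data."""
    return (
        "You are a privacy-conscious assistant filling a form on the user's behalf.\n"
        f"Candidate flow: share the user's {flow.information_type} "
        f"(field '{field}', value '{value}') from {flow.sender} "
        f"to {flow.recipient}, about {flow.subject}, "
        f"under the principle: {flow.transmission_principle}.\n"
        "Step 1: Describe the context and the norms that govern this flow.\n"
        "Step 2: Decide whether the flow is appropriate in that context.\n"
        "Answer SHARE or WITHHOLD, then justify briefly."
    )


# Hypothetical example: filling a medical-appointment form.
flow = InformationFlow(
    sender="the user's email archive",
    recipient="a clinic intake form",
    subject="the user",
    information_type="insurance ID",
    transmission_principle="required to book a medical appointment",
)
prompt = build_ci_prompt(flow, field="insurance_id", value="<redacted>")
print(prompt)
```

In practice this prompt would be sent to an LLM, and the assistant would fill the field only if the model answers SHARE; the benchmark in the paper evaluates such decisions against human annotations.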

