

Concept-Aware Privacy Mechanisms for Defending Embedding Inversion Attacks

February 6, 2026
Authors: Yu-Che Tsai, Hsiang Hsiao, Kuan-Yu Chen, Shou-De Lin
cs.AI

Abstract

Text embeddings enable numerous NLP applications but face severe privacy risks from embedding inversion attacks, which can expose sensitive attributes or reconstruct raw text. Existing differential privacy defenses assume uniform sensitivity across embedding dimensions, leading to excessive noise and degraded utility. We propose SPARSE, a user-centric framework for concept-specific privacy protection in text embeddings. SPARSE combines (1) differentiable mask learning to identify privacy-sensitive dimensions for user-defined concepts, and (2) the Mahalanobis mechanism that applies elliptical noise calibrated by dimension sensitivity. Unlike traditional spherical noise injection, SPARSE selectively perturbs privacy-sensitive dimensions while preserving non-sensitive semantics. Evaluated across six datasets with three embedding models and attack scenarios, SPARSE consistently reduces privacy leakage while achieving superior downstream performance compared to state-of-the-art DP methods.
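The contrast drawn in the abstract between spherical noise injection and sensitivity-calibrated elliptical noise can be illustrated with a minimal sketch. This is not the authors' implementation: the Gaussian noise model, the binary sensitivity mask, and names such as `sensitivity_mask` and `noise_scale` are assumptions chosen only to show how per-dimension noise calibration differs from uniform noise injection.

```python
# Illustrative sketch only (assumed, not from the paper): compare isotropic
# ("spherical") noise with per-dimension ("elliptical") noise whose scale is
# driven by a hypothetical learned sensitivity mask.
import numpy as np

rng = np.random.default_rng(0)

def spherical_noise(embedding: np.ndarray, noise_scale: float) -> np.ndarray:
    """Add i.i.d. Gaussian noise of equal scale to every dimension."""
    return embedding + rng.normal(0.0, noise_scale, size=embedding.shape)

def elliptical_noise(embedding: np.ndarray,
                     sensitivity_mask: np.ndarray,
                     noise_scale: float) -> np.ndarray:
    """Add Gaussian noise whose per-dimension scale follows the (assumed)
    sensitivity mask, so privacy-sensitive dimensions are perturbed strongly
    while non-sensitive dimensions are left largely intact."""
    per_dim_scale = noise_scale * sensitivity_mask
    return embedding + rng.normal(0.0, 1.0, size=embedding.shape) * per_dim_scale

# Toy usage: a 768-d embedding where roughly 5% of dimensions are flagged
# as privacy-sensitive by the (hypothetical) mask-learning step.
embedding = rng.normal(size=768)
sensitivity_mask = (rng.random(768) < 0.05).astype(float)
private_embedding = elliptical_noise(embedding, sensitivity_mask, noise_scale=1.0)
```

Under this toy setup, only the masked dimensions receive noise, which mirrors the paper's stated goal of perturbing privacy-sensitive dimensions while preserving non-sensitive semantics; the actual SPARSE mechanism calibrates the noise covariance via a Mahalanobis-style construction rather than a hard binary mask.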