
On the Evidentiary Limits of Membership Inference for Copyright Auditing

January 19, 2026
Authors: Murat Bilgehan Ertan, Emirhan Böge, Min Chen, Kaleel Mahmood, Marten van Dijk
cs.AI

Abstract

As large language models (LLMs) are trained on increasingly opaque corpora, membership inference attacks (MIAs) have been proposed to audit whether copyrighted texts were used during training, despite growing concerns about their reliability under realistic conditions. We ask whether MIAs can serve as admissible evidence in adversarial copyright disputes where an accused model developer may obfuscate training data while preserving semantic content, and formalize this setting through a judge-prosecutor-accused communication protocol. To test robustness under this protocol, we introduce SAGE (Structure-Aware SAE-Guided Extraction), a paraphrasing framework guided by Sparse Autoencoders (SAEs) that rewrites training data to alter lexical structure while preserving semantic content and downstream utility. Our experiments show that state-of-the-art MIAs degrade when models are fine-tuned on SAGE-generated paraphrases, indicating that their signals are not robust to semantics-preserving transformations. While some leakage remains in certain fine-tuning regimes, these results suggest that MIAs are brittle in adversarial settings and insufficient, on their own, as a standalone mechanism for copyright auditing of LLMs.
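To make the failure mode concrete, below is a minimal, hypothetical sketch of one common family of membership inference attacks, a loss-threshold test, applied with the Hugging Face `transformers` API. It is not the paper's SAGE pipeline nor necessarily the specific state-of-the-art MIAs the authors evaluate; the model name `gpt2`, the fixed threshold, and the example texts are illustrative placeholders. The point it illustrates is the abstract's claim: if the developer fine-tuned on a semantics-preserving paraphrase rather than the original text, the original's loss need not be unusually low, so the auditor's membership signal can disappear.

```python
# Hypothetical loss-threshold membership inference sketch (NOT the paper's
# SAGE framework or its exact MIA baselines). Illustrates why paraphrasing
# the training data can erase the lexical signal such attacks rely on.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder for the audited (accused) model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def sequence_loss(text: str) -> float:
    """Average token-level negative log-likelihood under the audited model."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()


def loss_threshold_mia(text: str, threshold: float = 3.0) -> bool:
    """Flag a text as a suspected training member if its loss is unusually low.

    The threshold here is an illustrative constant; real attacks calibrate it
    against reference models or data, or use per-token statistics instead.
    """
    return sequence_loss(text) < threshold


# The auditor tests the original copyrighted passage. If the developer trained
# only on a semantics-preserving rewrite, the original's loss may stay high
# and the membership test returns no evidence of use.
original = "An exact sentence from the allegedly copied work."
paraphrase = "A rewording of that sentence with the same meaning but different wording."
print(loss_threshold_mia(original), loss_threshold_mia(paraphrase))
```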