Micro-Defects Expose Macro-Fakes: Detecting AI-Generated Images via Local Distributional Shifts
May 10, 2026
Authors: Boxuan Zhang, Jianing Zhu, Qifan Wang, Jiang Liu, Ruixiang Tang
cs.AI
Abstract
Recent generative models can produce images that appear highly realistic, raising challenges in distinguishing real from AI-generated images. Yet existing detectors based on pre-trained feature extractors tend to over-rely on global semantics, limiting their sensitivity to critical micro-defects. In this work, we propose Micro-Defects Expose Macro-Fakes (MDMF), a local distribution-aware detection framework that amplifies micro-scale statistical irregularities into macro-level distributional discrepancies. To avoid localized forensic cues being diluted by plain aggregation, we introduce a learnable Patch Forensic Signature that projects semantic patch embeddings into a compact forensic latent space. We then use Maximum Mean Discrepancy (MMD) to quantify distributional discrepancies between generated and real images. Our theory-grounded analysis shows that patch-wise modeling yields provably larger discrepancies when localized forensic signals are present in generated images, enabling more reliable separation from real images. Extensive experiments demonstrate that MDMF consistently outperforms baseline detectors across multiple benchmarks, validating its general effectiveness. Project page: https://zbox1005.github.io/MDMF-project/
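The core measurement tool the abstract mentions, Maximum Mean Discrepancy, can be illustrated with a generic sketch. This is not the paper's implementation (which operates on learned patch forensic signatures); it is a standard unbiased squared-MMD estimator with an RBF kernel, applied to arbitrary sets of embedding vectors standing in for per-patch features:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel: k(x, y) = exp(-gamma * ||x - y||^2)
    sq_dists = (np.sum(X**2, axis=1)[:, None]
                + np.sum(Y**2, axis=1)[None, :]
                - 2.0 * X @ Y.T)
    return np.exp(-gamma * sq_dists)

def mmd2_unbiased(X, Y, gamma=1.0):
    """Unbiased estimate of squared MMD between samples X (m x d) and Y (n x d)."""
    m, n = len(X), len(Y)
    Kxx = rbf_kernel(X, X, gamma)
    Kyy = rbf_kernel(Y, Y, gamma)
    Kxy = rbf_kernel(X, Y, gamma)
    # Diagonal terms are excluded to keep the estimator unbiased.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    term_xy = 2.0 * Kxy.mean()
    return term_xx + term_yy - term_xy

# Toy check: embeddings drawn from a shifted distribution (a stand-in for
# generated-image patches with local statistical anomalies) should yield a
# larger MMD against a reference set than embeddings from the same distribution.
rng = np.random.default_rng(0)
real_a = rng.normal(0.0, 1.0, size=(200, 8))
real_b = rng.normal(0.0, 1.0, size=(200, 8))
fake = rng.normal(0.5, 1.0, size=(200, 8))
print(mmd2_unbiased(real_a, real_b), mmd2_unbiased(real_a, fake))
```

In the paper's framing, the projection into a forensic latent space is what makes such patch-level discrepancies large enough to separate generated from real images; the estimator above only shows how the separation would be scored.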