Micro-Defects Expose Macro-Fakes: Detecting AI-Generated Images via Local Distributional Shifts

May 10, 2026
作者: Boxuan Zhang, Jianing Zhu, Qifan Wang, Jiang Liu, Ruixiang Tang
cs.AI

Abstract

Recent generative models can produce images that appear highly realistic, raising challenges in distinguishing real and AI-generated images. Yet existing detectors based on pre-trained feature extractors tend to over-rely on global semantics, limiting sensitivity to the critical micro-defects. In this work, we propose Micro-Defects expose Macro-Fakes (MDMF), a local distribution-aware detection framework that amplifies micro-scale statistical irregularities into macro-level distributional discrepancies. To avoid localized forensic cues being diluted by plain aggregation, we introduce a learnable Patch Forensic Signature that projects semantic patch embeddings into a compact forensic latent space. We then use Maximum Mean Discrepancy (MMD) to quantify distributional discrepancies between generated and real images. Our theory-grounded analysis shows that patch-wise modeling yields provably larger discrepancies when localized forensic signals are present in generated images, enabling more reliable separation from real images. Extensive experiments demonstrate that MDMF consistently outperforms baseline detectors across multiple benchmarks, validating its general effectiveness. Project page: https://zbox1005.github.io/MDMF-project/