

Layer by layer, module by module: Choose both for optimal OOD probing of ViT

March 5, 2026
作者: Ambroise Odonnat, Vasilii Feofanov, Laetitia Chapel, Romain Tavenard, Ievgen Redko
cs.AI

Abstract

Recent studies have observed that intermediate layers of foundation models often yield more discriminative representations than the final layer. While initially attributed to autoregressive pretraining, this phenomenon has also been identified in models trained via supervised and discriminative self-supervised objectives. In this paper, we conduct a comprehensive study to analyze the behavior of intermediate layers in pretrained vision transformers. Through extensive linear probing experiments across a diverse set of image classification benchmarks, we find that distribution shift between pretraining and downstream data is the primary cause of performance degradation in deeper layers. Furthermore, we perform a fine-grained analysis at the module level. Our findings reveal that standard probing of transformer block outputs is suboptimal; instead, probing the activation within the feedforward network yields the best performance under significant distribution shift, whereas the normalized output of the multi-head self-attention module is optimal when the shift is weak.
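The linear probing protocol described in the abstract can be illustrated with a minimal, self-contained sketch. Everything below is an illustrative assumption rather than the authors' code: the three probing points are stood in for by synthetic Gaussian features whose class separation differs, whereas in a real setup the features would be collected with forward hooks on each transformer block's output, the feedforward network's hidden activation, and the normalized multi-head self-attention output of a frozen pretrained ViT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for frozen ViT features at three probing points.
# Module names and separation values are illustrative assumptions only.
n_train, n_test, n_classes, dim = 100, 100, 4, 32
separation = {"block_output": 0.2, "ffn_activation": 2.0, "mhsa_normalized": 1.0}

def make_features(sep):
    """Draw train/test features per class from Gaussians `sep` apart."""
    means = rng.normal(size=(n_classes, dim)) * sep

    def draw(n):
        X = np.vstack([rng.normal(loc=m, size=(n, dim)) for m in means])
        y = np.repeat(np.arange(n_classes), n)
        return X, y

    return draw(n_train), draw(n_test)

def probe_accuracy(train, test):
    """Fit a linear probe on frozen features and score it on held-out data."""
    (Xtr, ytr), (Xte, yte) = train, test
    # Closed-form ridge regression to one-hot targets: a minimal linear probe.
    Y = np.eye(n_classes)[ytr]
    W = np.linalg.solve(Xtr.T @ Xtr + 1e-3 * np.eye(dim), Xtr.T @ Y)
    return float((np.argmax(Xte @ W, axis=1) == yte).mean())

accuracies = {name: probe_accuracy(*make_features(sep))
              for name, sep in separation.items()}
for name, acc in accuracies.items():
    print(f"{name}: {acc:.2f}")
```

In this toy setup the probing point with the most separable features wins, mirroring the paper's comparison of block outputs, FFN activations, and normalized MHSA outputs; the paper's actual finding is which point wins as a function of distribution shift, which this sketch does not model.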
PDF · May 8, 2026