Layer by layer, module by module: Choose both for optimal OOD probing of ViT
March 5, 2026
Authors: Ambroise Odonnat, Vasilii Feofanov, Laetitia Chapel, Romain Tavenard, Ievgen Redko
cs.AI
Abstract
Recent studies have observed that intermediate layers of foundation models often yield more discriminative representations than the final layer. While initially attributed to autoregressive pretraining, this phenomenon has also been identified in models trained via supervised and discriminative self-supervised objectives. In this paper, we conduct a comprehensive study to analyze the behavior of intermediate layers in pretrained vision transformers. Through extensive linear probing experiments across a diverse set of image classification benchmarks, we find that distribution shift between pretraining and downstream data is the primary cause of performance degradation in deeper layers. Furthermore, we perform a fine-grained analysis at the module level. Our findings reveal that standard probing of transformer block outputs is suboptimal; instead, probing the activation within the feedforward network yields the best performance under significant distribution shift, whereas the normalized output of the multi-head self-attention module is optimal when the shift is weak.
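As a rough illustration of the linear-probing protocol the abstract refers to, the sketch below trains a softmax linear classifier on frozen features, as one would on activations extracted from a chosen ViT layer or module. The data here is synthetic and all names are hypothetical; it is a minimal sketch of the evaluation recipe, not the authors' implementation.

```python
import numpy as np

def train_linear_probe(X, y, num_classes, lr=0.1, steps=500):
    """Fit a softmax linear probe on frozen features X of shape (n, d)."""
    W = np.zeros((X.shape[1], num_classes))
    b = np.zeros(num_classes)
    Y = np.eye(num_classes)[y]  # one-hot labels
    for _ in range(steps):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        G = (P - Y) / len(X)  # gradient of mean cross-entropy w.r.t. logits
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

def probe_accuracy(W, b, X, y):
    return float((np.argmax(X @ W + b, axis=1) == y).mean())

# Synthetic stand-in for frozen features from one candidate layer/module.
rng = np.random.default_rng(42)
n, d, k = 600, 32, 3
y = rng.integers(0, k, n)
class_means = rng.normal(size=(k, d))
X = rng.normal(size=(n, d)) + 2.0 * np.eye(k)[y] @ class_means

W, b = train_linear_probe(X[:400], y[:400], k)
acc = probe_accuracy(W, b, X[400:], y[400:])
```

In the paper's setting, the same probe would be refit independently on each layer's (or each module's) frozen representations, and the held-out accuracies compared across depths.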