

Locality-Attending Vision Transformer

March 5, 2026
Authors: Sina Hajimiri, Farzad Beizaee, Fereshteh Shakeri, Christian Desrosiers, Ismail Ben Ayed, Jose Dolz
cs.AI

Abstract

Vision transformers have demonstrated remarkable success in classification by leveraging global self-attention to capture long-range dependencies. However, this same mechanism can obscure fine-grained spatial details crucial for tasks such as segmentation. In this work, we seek to enhance segmentation performance of vision transformers after standard image-level classification training. More specifically, we present a simple yet effective add-on that improves performance on segmentation tasks while retaining vision transformers' image-level recognition capabilities. In our approach, we modulate the self-attention with a learnable Gaussian kernel that biases the attention toward neighboring patches. We further refine the patch representations to learn better embeddings at patch positions. These modifications encourage tokens to focus on local surroundings and ensure meaningful representations at spatial positions, while still preserving the model's ability to incorporate global information. Experiments demonstrate the effectiveness of our modifications, evidenced by substantial segmentation gains on three benchmarks (e.g., over 6% and 4% on ADE20K for ViT Tiny and Base), without changing the training regime or sacrificing classification performance. The code is available at https://github.com/sinahmr/LocAtViT/.
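The core idea of the abstract, biasing self-attention toward neighboring patches with a learnable Gaussian kernel, can be illustrated with a minimal sketch. This is not the authors' implementation (see their repository for that); it assumes a hypothetical single-head-style setup where a Gaussian of the squared grid distance between patch positions, with a learnable bandwidth, is added in log-space to the attention logits before the softmax:

```python
import torch

def gaussian_locality_bias(grid_h, grid_w, sigma):
    # Pairwise squared distances between patch positions on the 2D grid.
    ys, xs = torch.meshgrid(torch.arange(grid_h), torch.arange(grid_w), indexing="ij")
    pos = torch.stack([ys.flatten(), xs.flatten()], dim=-1).float()  # (N, 2)
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)          # (N, N)
    # Log of an (unnormalized) Gaussian kernel: 0 for a patch attending
    # to itself, increasingly negative for distant patches.
    return -d2 / (2 * sigma ** 2)

def locality_attention(q, k, v, grid_h, grid_w, log_sigma):
    # q, k, v: (batch, heads, N, dim) patch tokens; log_sigma is a
    # learnable scalar parameter (exp keeps the bandwidth positive).
    scale = q.shape[-1] ** -0.5
    logits = (q @ k.transpose(-2, -1)) * scale
    bias = gaussian_locality_bias(grid_h, grid_w, log_sigma.exp())
    # Adding the bias to the logits multiplies the attention weights by
    # the Gaussian kernel, favoring nearby patches while keeping global
    # attention reachable as the learned bandwidth grows.
    return (logits + bias).softmax(dim=-1) @ v
```

Because the bias enters before the softmax, a large learned bandwidth recovers (approximately) plain global attention, which is consistent with the abstract's claim that global information integration is preserved.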