ViTAR: Vision Transformer with Any Resolution
March 27, 2024
Authors: Qihang Fan, Quanzeng You, Xiaotian Han, Yongfei Liu, Yunzhe Tao, Huaibo Huang, Ran He, Hongxia Yang
cs.AI
Abstract
This paper tackles a significant challenge faced by Vision Transformers
(ViTs): their constrained scalability across different image resolutions.
Typically, ViTs experience a performance decline when processing resolutions
different from those seen during training. Our work introduces two key
innovations to address this issue. Firstly, we propose a novel module for
dynamic resolution adjustment, designed with a single Transformer block,
specifically to achieve highly efficient incremental token integration.
Secondly, we introduce fuzzy positional encoding in the Vision Transformer to
provide consistent positional awareness across multiple resolutions, thereby
preventing overfitting to any single training resolution. Our resulting model,
ViTAR (Vision Transformer with Any Resolution), demonstrates impressive
adaptability, achieving 83.3% top-1 accuracy at a 1120x1120 resolution and
80.4% accuracy at a 4032x4032 resolution, all while reducing computational
costs. ViTAR also shows strong performance in downstream tasks such as instance
and semantic segmentation, and can easily be combined with self-supervised learning
techniques like Masked AutoEncoder. Our work provides a cost-effective solution
for enhancing the resolution scalability of ViTs, paving the way for more
versatile and efficient high-resolution image processing.
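The dynamic-resolution module described above can be sketched in simplified form. The sketch below is an illustrative reconstruction under stated assumptions, not the authors' implementation: ViTAR merges tokens with attention inside adaptive windows, whereas plain mean pooling stands in for that here. The key property it demonstrates is that an arbitrary input token grid is reduced to a fixed-size grid, so the backbone always sees the same token count regardless of input resolution.

```python
import numpy as np

def adaptive_token_merge(tokens, h, w, target=14):
    """Simplified stand-in for an adaptive token merger: collapse an
    arbitrary h x w token grid to a fixed target x target grid by
    mean-pooling each adaptive window. (Mean pooling replaces the
    paper's attention-based merging for brevity.)"""
    dim = tokens.shape[-1]
    grid = tokens.reshape(h, w, dim)
    # Split rows and columns into `target` nearly equal windows.
    row_bins = np.array_split(np.arange(h), target)
    col_bins = np.array_split(np.arange(w), target)
    out = np.empty((target, target, dim))
    for i, rb in enumerate(row_bins):
        for j, cb in enumerate(col_bins):
            # Average all tokens falling inside window (i, j).
            out[i, j] = grid[np.ix_(rb, cb)].mean(axis=(0, 1))
    return out.reshape(target * target, dim)

# Two different input resolutions map to the same fixed token count.
tokens_28 = np.random.default_rng(0).normal(size=(28 * 28, 16))
tokens_30 = np.random.default_rng(1).normal(size=(30 * 30, 16))
merged_28 = adaptive_token_merge(tokens_28, 28, 28, target=14)
merged_30 = adaptive_token_merge(tokens_30, 30, 30, target=14)
```

Because the output grid size is fixed (here 14x14 = 196 tokens), the cost of the subsequent Transformer blocks is independent of the input resolution, which is the source of the computational savings the abstract mentions.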
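Fuzzy positional encoding can likewise be sketched: the idea is to jitter each token's grid coordinate by a small random offset during training, so the positional signal never locks onto one fixed coordinate grid and therefore does not overfit a single training resolution. The sinusoidal embedding and the [-0.5, 0.5) jitter range below are assumed concrete choices for illustration, not necessarily the paper's exact formulation.

```python
import numpy as np

def fuzzy_positional_encoding(h, w, dim, training=True, rng=None):
    """Sketch of fuzzy positional encoding: perturb each token's
    (row, col) coordinate with uniform noise in [-0.5, 0.5) at
    training time, then build a 2D sinusoidal embedding from the
    perturbed coordinates. Illustrative reconstruction only."""
    rng = rng or np.random.default_rng(0)
    rows, cols = np.meshgrid(np.arange(h, dtype=float),
                             np.arange(w, dtype=float), indexing="ij")
    if training:
        # The "fuzzy" step: coordinates are jittered, not exact.
        rows = rows + rng.uniform(-0.5, 0.5, size=rows.shape)
        cols = cols + rng.uniform(-0.5, 0.5, size=cols.shape)
    half = dim // 2  # half the channels encode rows, half encode cols
    freqs = 1.0 / (10000 ** (np.arange(0, half, 2) / half))

    def sincos(coord):
        angles = coord[..., None] * freqs
        return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

    pe = np.concatenate([sincos(rows), sincos(cols)], axis=-1)
    return pe.reshape(h * w, dim)

pe_train = fuzzy_positional_encoding(4, 4, 8, training=True)
pe_eval = fuzzy_positional_encoding(4, 4, 8, training=False)
```

At inference time the jitter is disabled, so the encoding is deterministic; during training each epoch sees slightly different positions for the same token, which is what provides the resolution robustness described in the abstract.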