

Patch n' Pack: NaViT, a Vision Transformer for any Aspect Ratio and Resolution

July 12, 2023
作者: Mostafa Dehghani, Basil Mustafa, Josip Djolonga, Jonathan Heek, Matthias Minderer, Mathilde Caron, Andreas Steiner, Joan Puigcerver, Robert Geirhos, Ibrahim Alabdulmohsin, Avital Oliver, Piotr Padlewski, Alexey Gritsenko, Mario Lučić, Neil Houlsby
cs.AI

Abstract

The ubiquitous and demonstrably suboptimal choice of resizing images to a fixed resolution before processing them with computer vision models has not yet been successfully challenged. However, models such as the Vision Transformer (ViT) offer flexible sequence-based modeling, and hence varying input sequence lengths. We take advantage of this with NaViT (Native Resolution ViT) which uses sequence packing during training to process inputs of arbitrary resolutions and aspect ratios. Alongside flexible model usage, we demonstrate improved training efficiency for large-scale supervised and contrastive image-text pretraining. NaViT can be efficiently transferred to standard tasks such as image and video classification, object detection, and semantic segmentation and leads to improved results on robustness and fairness benchmarks. At inference time, the input resolution flexibility can be used to smoothly navigate the test-time cost-performance trade-off. We believe that NaViT marks a departure from the standard, CNN-designed, input and modelling pipeline used by most computer vision models, and represents a promising direction for ViTs.
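The core idea of sequence packing can be illustrated with a minimal sketch: images of different resolutions and aspect ratios are split into ViT-style patches, and patch tokens from several images are concatenated into one fixed-length training sequence, with an example-id array marking which image each token belongs to (so attention can later be masked per example). This is an illustrative sketch, not NaViT's actual implementation; the patch size, the token budget `MAX_TOKENS`, and the greedy packing policy are assumptions for demonstration.

```python
import numpy as np

PATCH = 16        # patch side length (ViT-style; illustrative choice)
MAX_TOKENS = 256  # packed sequence length budget (hypothetical value)

def patchify(img):
    """Split an H x W x C image (H, W multiples of PATCH) into flat patch tokens."""
    h, w, c = img.shape
    gh, gw = h // PATCH, w // PATCH
    patches = img.reshape(gh, PATCH, gw, PATCH, c).swapaxes(1, 2)
    return patches.reshape(gh * gw, PATCH * PATCH * c)

def pack(images):
    """Greedily pack variable-resolution images into one token sequence.

    Returns (tokens, example_ids): example_ids marks which image each token
    belongs to (-1 = padding), so attention can be restricted per example.
    """
    tokens, ids = [], []
    for i, img in enumerate(images):
        p = patchify(img)
        if len(p) + sum(len(t) for t in tokens) > MAX_TOKENS:
            break  # a real packer would start a new packed sequence here
        tokens.append(p)
        ids.extend([i] * len(p))
    seq = np.concatenate(tokens)
    pad = MAX_TOKENS - len(seq)                 # pad up to the fixed budget
    seq = np.pad(seq, ((0, pad), (0, 0)))
    ids.extend([-1] * pad)
    return seq, np.array(ids)

# Two images with different resolutions and aspect ratios
imgs = [np.ones((32, 64, 3)), np.ones((48, 32, 3))]
seq, ids = pack(imgs)  # 8 + 6 real tokens, rest padding
```

Because every packed sequence has the same length regardless of the native resolutions inside it, batches stay rectangular and standard Transformer training applies unchanged; only the attention mask (derived from `example_ids`) keeps images from attending to each other.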