

When Do We Not Need Larger Vision Models?

March 19, 2024
Authors: Baifeng Shi, Ziyang Wu, Maolin Mao, Xin Wang, Trevor Darrell
cs.AI

Abstract

Scaling up the size of vision models has been the de facto standard to obtain more powerful visual representations. In this work, we discuss the point beyond which larger vision models are not necessary. First, we demonstrate the power of Scaling on Scales (S^2), whereby a pre-trained and frozen smaller vision model (e.g., ViT-B or ViT-L), run over multiple image scales, can outperform larger models (e.g., ViT-H or ViT-G) on classification, segmentation, depth estimation, Multimodal LLM (MLLM) benchmarks, and robotic manipulation. Notably, S^2 achieves state-of-the-art performance in detailed understanding of MLLM on the V* benchmark, surpassing models such as GPT-4V. We examine the conditions under which S^2 is a preferred scaling approach compared to scaling on model size. While larger models have the advantage of better generalization on hard examples, we show that features of larger vision models can be well approximated by those of multi-scale smaller models. This suggests most, if not all, of the representations learned by current large pre-trained models can also be obtained from multi-scale smaller models. Our results show that a multi-scale smaller model has comparable learning capacity to a larger model, and pre-training smaller models with S^2 can match or even exceed the advantage of larger models. We release a Python package that can apply S^2 on any vision model with one line of code: https://github.com/bfshi/scaling_on_scales.
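The core of S^2, as described above, is running one pre-trained, frozen model over several image scales and merging the results. Below is a minimal NumPy sketch of that idea under stated assumptions: `toy_backbone`, `resize_nearest`, and the exact split/stitch/pool choices are illustrative stand-ins invented here, not the actual implementation in the released `scaling_on_scales` package (see the linked repo for the real one-line API).

```python
import numpy as np

def resize_nearest(img, size):
    """Nearest-neighbor resize of an (H, W, C) image to (size, size, C)."""
    h, w = img.shape[:2]
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[ys][:, xs]

def toy_backbone(img):
    """Stand-in for a frozen ViT: average-pools 16x16 patches of a square
    image into an (s/16, s/16, C) feature map. Purely illustrative."""
    s = img.shape[0]
    p = s // 16
    return img[:p * 16, :p * 16].reshape(p, 16, p, 16, -1).mean(axis=(1, 3))

def s2_forward(model, img, scales=(1, 2), base=224):
    """S^2-style multi-scale feature extraction with a frozen model.

    Runs `model` on the image at several scales; larger scales are split
    into base-sized sub-images, the sub-features are stitched together,
    pooled back to the base feature resolution, and concatenated
    channel-wise with the base-scale features.
    """
    f0 = model(resize_nearest(img, base))          # (p, p, C) at scale 1
    p, c = f0.shape[0], f0.shape[2]
    feats = [f0]
    for s in scales[1:]:
        big = resize_nearest(img, base * s)        # upscaled input
        stitched = np.zeros((p * s, p * s, c))
        for i in range(s):                         # run the same frozen model
            for j in range(s):                     # on each base-sized tile
                tile = big[i * base:(i + 1) * base, j * base:(j + 1) * base]
                stitched[i * p:(i + 1) * p, j * p:(j + 1) * p] = model(tile)
        # average-pool the stitched map back to the base resolution
        pooled = stitched.reshape(p, s, p, s, c).mean(axis=(1, 3))
        feats.append(pooled)
    return np.concatenate(feats, axis=-1)          # channel-wise concat
```

With this toy setup, `s2_forward(toy_backbone, image, scales=(1, 2))` keeps the spatial size of the base feature map fixed while the channel dimension grows with each added scale, which is how S^2 scales compute without touching the model's parameters.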

