When Do We Not Need Larger Vision Models?
March 19, 2024
Authors: Baifeng Shi, Ziyang Wu, Maolin Mao, Xin Wang, Trevor Darrell
cs.AI
Abstract
Scaling up the size of vision models has been the de facto standard to obtain
more powerful visual representations. In this work, we discuss the point beyond
which larger vision models are not necessary. First, we demonstrate the power
of Scaling on Scales (S^2), whereby a pre-trained and frozen smaller vision
model (e.g., ViT-B or ViT-L), run over multiple image scales, can outperform
larger models (e.g., ViT-H or ViT-G) on classification, segmentation, depth
estimation, Multimodal LLM (MLLM) benchmarks, and robotic manipulation.
Notably, S^2 achieves state-of-the-art performance in detailed understanding
of MLLM on the V* benchmark, surpassing models such as GPT-4V. We examine the
conditions under which S^2 is a preferred scaling approach compared to
scaling on model size. While larger models have the advantage of better
generalization on hard examples, we show that features of larger vision models
can be well approximated by those of multi-scale smaller models. This suggests
most, if not all, of the representations learned by current large pre-trained
models can also be obtained from multi-scale smaller models. Our results show
that a multi-scale smaller model has comparable learning capacity to a larger
model, and pre-training smaller models with S^2 can match or even exceed the
advantage of larger models. We release a Python package that can apply S^2 on
any vision model with one line of code:
https://github.com/bfshi/scaling_on_scales.
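To make the S^2 idea concrete, here is a minimal, dependency-free sketch of the scheme the abstract describes: a frozen small model is run over multiple image scales, each scale is split into tiles of the model's native input size, per-tile features are pooled, and the pooled features are concatenated across scales. All names here (`toy_model`, `s2_features`, the nearest-neighbor resize) are illustrative assumptions for exposition, not the API of the released `scaling_on_scales` package.

```python
# Illustrative sketch of Scaling on Scales (S^2); the "model" is a toy
# stand-in, not a real ViT, and all helper names are hypothetical.

BASE = 4  # the model's native input side length (toy value)

def toy_model(img):
    """Stand-in for a frozen small vision model: maps a BASE x BASE
    image to a 2-dim feature vector (mean and max of its pixels)."""
    flat = [p for row in img for p in row]
    return [sum(flat) / len(flat), max(flat)]

def resize_nn(img, size):
    """Nearest-neighbor resize of a square image to size x size."""
    n = len(img)
    return [[img[i * n // size][j * n // size] for j in range(size)]
            for i in range(size)]

def s2_features(img, scales=(1, 2)):
    """S^2: for each scale k, resize the image to k*BASE, split it into
    k*k BASE-sized tiles, run the frozen model on every tile,
    average-pool the tile features, then concatenate across scales."""
    feats = []
    for k in scales:
        big = resize_nn(img, k * BASE)
        tile_feats = []
        for ti in range(k):
            for tj in range(k):
                tile = [row[tj * BASE:(tj + 1) * BASE]
                        for row in big[ti * BASE:(ti + 1) * BASE]]
                tile_feats.append(toy_model(tile))
        # average-pool features over the k*k tiles at this scale
        avg = [sum(f[d] for f in tile_feats) / len(tile_feats)
               for d in range(len(tile_feats[0]))]
        feats.extend(avg)  # concatenate across scales
    return feats

img = [[float(i * BASE + j) for j in range(BASE)] for i in range(BASE)]
print(len(s2_features(img, scales=(1, 2))))  # 2 scales * 2 dims per scale
```

Note that the model itself is unchanged and stays frozen; only the input pipeline scales, which is why the feature dimension grows linearly with the number of scales rather than with model size.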