Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks
October 30, 2023
Authors: Micah Goldblum, Hossein Souri, Renkun Ni, Manli Shu, Viraj Prabhu, Gowthami Somepalli, Prithvijit Chattopadhyay, Mark Ibrahim, Adrien Bardes, Judy Hoffman, Rama Chellappa, Andrew Gordon Wilson, Tom Goldstein
cs.AI
Abstract
Neural network based computer vision systems are typically built on a
backbone, a pretrained or randomly initialized feature extractor. Several years
ago, the default option was an ImageNet-trained convolutional neural network.
However, the recent past has seen the emergence of countless backbones
pretrained using various algorithms and datasets. While this abundance of
choice has led to performance increases for a range of systems, it is difficult
for practitioners to make informed decisions about which backbone to choose.
Battle of the Backbones (BoB) makes this choice easier by benchmarking a
diverse suite of pretrained models, including vision-language models, those
trained via self-supervised learning, and the Stable Diffusion backbone, across
a diverse set of computer vision tasks ranging from classification to object
detection to OOD generalization and more. Furthermore, BoB sheds light on
promising directions for the research community to advance computer vision by
illuminating strengths and weaknesses of existing approaches through a
comprehensive analysis conducted on more than 1500 training runs. While vision
transformers (ViTs) and self-supervised learning (SSL) are increasingly
popular, we find that convolutional neural networks pretrained in a supervised
fashion on large training sets still perform best on most tasks among the
models we consider. Moreover, in apples-to-apples comparisons on the same
architectures and similarly sized pretraining datasets, we find that SSL
backbones are highly competitive, indicating that future work should perform
SSL pretraining with advanced architectures and larger pretraining datasets. We
release the raw results of our experiments along with code that allows
researchers to put their own backbones through the gauntlet here:
https://github.com/hsouri/Battle-of-the-Backbones
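To make the benchmarking setup concrete, below is a minimal sketch of the kind of evaluation described in the abstract: loading a pretrained backbone and using it as a frozen feature extractor for a downstream task. This is not the BoB repository's actual interface; it assumes the timm library, and the model name is purely illustrative.

```python
import torch
import timm

# Load a supervised ImageNet-pretrained ConvNet backbone; num_classes=0
# strips the classification head so the model returns pooled features.
backbone = timm.create_model("resnet50", pretrained=True, num_classes=0)
backbone.eval()

# Freeze the backbone so only a downstream head (e.g., a linear probe
# or detection head) would be trained on top of the extracted features.
for p in backbone.parameters():
    p.requires_grad = False

# Extract features from a batch of images shaped (N, 3, H, W).
images = torch.randn(4, 3, 224, 224)
with torch.no_grad():
    features = backbone(images)  # shape (4, 2048) for ResNet-50
print(features.shape)
```

Swapping the model name for a ViT, a self-supervised checkpoint, or a vision-language encoder is what enables the apples-to-apples comparisons the abstract describes; the released code at the URL above supports plugging in custom backbones directly.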