Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks
October 30, 2023
Authors: Micah Goldblum, Hossein Souri, Renkun Ni, Manli Shu, Viraj Prabhu, Gowthami Somepalli, Prithvijit Chattopadhyay, Mark Ibrahim, Adrien Bardes, Judy Hoffman, Rama Chellappa, Andrew Gordon Wilson, Tom Goldstein
cs.AI
Abstract
Neural-network-based computer vision systems are typically built on a
backbone, a pretrained or randomly initialized feature extractor. Several years
ago, the default option was an ImageNet-trained convolutional neural network.
However, the recent past has seen the emergence of countless backbones
pretrained using various algorithms and datasets. While this abundance of
choice has led to performance increases for a range of systems, it is difficult
for practitioners to make informed decisions about which backbone to choose.
Battle of the Backbones (BoB) makes this choice easier by benchmarking a
diverse suite of pretrained models, including vision-language models, those
trained via self-supervised learning, and the Stable Diffusion backbone, across
a diverse set of computer vision tasks ranging from classification to object
detection to OOD generalization and more. Furthermore, BoB sheds light on
promising directions for the research community to advance computer vision by
illuminating strengths and weaknesses of existing approaches through a
comprehensive analysis conducted on more than 1500 training runs. While vision
transformers (ViTs) and self-supervised learning (SSL) are increasingly
popular, we find that convolutional neural networks pretrained in a supervised
fashion on large training sets still perform best on most tasks among the
models we consider. Moreover, in apples-to-apples comparisons on the same
architectures and similarly sized pretraining datasets, we find that SSL
backbones are highly competitive, indicating that future work should perform
SSL pretraining with advanced architectures and larger pretraining datasets. We
release the raw results of our experiments along with code that allows
researchers to put their own backbones through the gauntlet here:
https://github.com/hsouri/Battle-of-the-Backbones
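
The full evaluation harness lives in the linked repository. As a rough illustration of the kind of apples-to-apples comparison the paper describes, the sketch below freezes a pretrained backbone and fits a linear probe on a downstream classification set. The use of timm, CIFAR-10, and the two example checkpoint names are assumptions for illustration, not the repository's actual API or the paper's protocol.

```python
# Minimal sketch: compare frozen pretrained backbones via linear probing.
# Assumptions (not from the paper): timm for model loading, CIFAR-10 as
# the downstream task, AdamW for the probe. BoB's real harness differs.
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets


def linear_probe_accuracy(backbone_name: str, epochs: int = 1) -> float:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # num_classes=0 strips the classification head, so the model
    # returns pooled features instead of logits.
    backbone = timm.create_model(backbone_name, pretrained=True, num_classes=0)
    backbone.eval().requires_grad_(False).to(device)

    # Use the preprocessing the checkpoint was trained with.
    cfg = timm.data.resolve_data_config({}, model=backbone)
    transform = timm.data.create_transform(**cfg)
    train = datasets.CIFAR10("data", train=True, download=True, transform=transform)
    test = datasets.CIFAR10("data", train=False, download=True, transform=transform)
    train_loader = DataLoader(train, batch_size=256, shuffle=True)
    test_loader = DataLoader(test, batch_size=256)

    # Only the linear probe is trained; the backbone stays frozen.
    probe = nn.Linear(backbone.num_features, 10).to(device)
    opt = torch.optim.AdamW(probe.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = backbone(x)
            loss = nn.functional.cross_entropy(probe(feats), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

    correct = total = 0
    with torch.no_grad():
        for x, y in test_loader:
            x, y = x.to(device), y.to(device)
            correct += (probe(backbone(x)).argmax(1) == y).sum().item()
            total += y.numel()
    return correct / total


# e.g. pit a supervised CNN against a ViT checkpoint on the same task:
for name in ["resnet50", "vit_base_patch16_224"]:
    print(name, linear_probe_accuracy(name))
```

Holding the downstream task, probe, and optimizer fixed while swapping only the backbone is what makes such comparisons apples-to-apples; differences in accuracy can then be attributed to the pretrained features rather than the evaluation setup.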