Scalable Pre-training of Large Autoregressive Image Models
January 16, 2024
Authors: Alaaeldin El-Nouby, Michal Klein, Shuangfei Zhai, Miguel Angel Bautista, Alexander Toshev, Vaishaal Shankar, Joshua M Susskind, Armand Joulin
cs.AI
Abstract
This paper introduces AIM, a collection of vision models pre-trained with an autoregressive objective. These models are inspired by their textual counterparts, i.e., Large Language Models (LLMs), and exhibit similar scaling properties. Specifically, we highlight two key findings: (1) the performance of the visual features scales with both the model capacity and the quantity of data, and (2) the value of the objective function correlates with the performance of the model on downstream tasks. We illustrate the practical implications of these findings by pre-training a 7-billion-parameter AIM on 2 billion images, which achieves 84.0% on ImageNet-1k with a frozen trunk. Interestingly, even at this scale, we observe no sign of saturation in performance, suggesting that AIM potentially represents a new frontier for training large-scale vision models. The pre-training of AIM is similar to the pre-training of LLMs and does not require any image-specific strategy to stabilize the training at scale.
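
To make the abstract's notion of an autoregressive objective for images concrete, below is a minimal sketch of next-patch prediction with a causally masked Transformer and a pixel-regression loss. This is not the authors' implementation: the class name `CausalImageAR`, the patch size, the model dimensions, and the use of a plain `nn.TransformerEncoder` with raw-pixel MSE targets are illustrative assumptions made here for the sketch.

```python
# Minimal sketch of autoregressive image modeling: predict the next image patch
# from all previous patches in raster order. Illustrative only; sizes and module
# choices are assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn


class CausalImageAR(nn.Module):
    def __init__(self, image_size=224, patch_size=14, dim=512, depth=8, heads=8):
        super().__init__()
        self.patch_size = patch_size
        self.num_patches = (image_size // patch_size) ** 2
        patch_dim = 3 * patch_size * patch_size
        self.embed = nn.Linear(patch_dim, dim)                  # patch embedding
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, patch_dim)                   # regress pixel values

    def patchify(self, imgs):
        # (B, 3, H, W) -> (B, N, 3*p*p), patches in raster (row-major) order.
        p = self.patch_size
        b, c, h, w = imgs.shape
        x = imgs.unfold(2, p, p).unfold(3, p, p)                # (B, C, H/p, W/p, p, p)
        return x.permute(0, 2, 3, 1, 4, 5).reshape(b, -1, c * p * p)

    def forward(self, imgs):
        patches = self.patchify(imgs)                           # regression targets
        x = self.embed(patches) + self.pos
        n = x.size(1)
        # Additive causal mask: each position attends only to earlier patches.
        mask = torch.full((n, n), float("-inf"), device=x.device).triu(1)
        h = self.blocks(x, mask=mask)
        pred = self.head(h)
        # Next-patch prediction: position i predicts patch i+1.
        return nn.functional.mse_loss(pred[:, :-1], patches[:, 1:])


# Usage: one training step on a random batch (stand-in for real image data).
model = CausalImageAR()
imgs = torch.randn(2, 3, 224, 224)
loss = model(imgs)
loss.backward()
```

The raster ordering of patches plays the role that token order plays in LLM pre-training; scaling the Transformer's depth and width and the number of training images corresponds to the capacity and data axes studied in the paper.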