
Scaling MLPs: A Tale of Inductive Bias

June 23, 2023
Authors: Gregor Bachmann, Sotiris Anagnostidis, Thomas Hofmann
cs.AI

Abstract

In this work we revisit the most fundamental building block in deep learning, the multi-layer perceptron (MLP), and study the limits of its performance on vision tasks. Empirical insights into MLPs are important for multiple reasons. (1) Given the recent narrative "less inductive bias is better", popularized due to transformers eclipsing convolutional models, it is natural to explore the limits of this hypothesis. To that end, MLPs offer an ideal test bed, being completely free of any inductive bias. (2) MLPs have almost exclusively been the main protagonist in the deep learning theory literature due to their mathematical simplicity, serving as a proxy to explain empirical phenomena observed for more complex architectures. Surprisingly, experimental datapoints for MLPs are very difficult to find in the literature, especially when coupled with large pre-training protocols. This discrepancy between practice and theory is worrying: Do MLPs reflect the empirical advances exhibited by practical models? Or do theorists need to rethink the role of MLPs as a proxy? We provide insights into both these aspects. We show that the performance of MLPs drastically improves with scale (93% on CIFAR10, 79% on CIFAR100, 69% on TinyImageNet), highlighting that a lack of inductive bias can indeed be compensated for. We observe that MLPs mimic the behaviour of their modern counterparts faithfully, although some components of the learning setting surprisingly exhibit stronger or unexpected behaviours. Due to their inherent computational efficiency, large pre-training experiments become more accessible for academic researchers. All of our experiments were run on a single GPU.
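For readers who want a concrete picture of the model family the abstract refers to, the sketch below shows an MLP applied to flattened image pixels, which is what "completely free of any inductive bias" means in practice: the spatial structure of the image is discarded before the first layer. The widths, depth, activation, and normalization choices here are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch (not the authors' exact model) of an MLP on flattened images.
# Width, depth, GELU, and LayerNorm are illustrative assumptions.
import torch
import torch.nn as nn

class SimpleMLP(nn.Module):
    def __init__(self, image_size=64, channels=3, width=1024, depth=6, num_classes=200):
        super().__init__()
        # Flattening removes all spatial structure: no locality, no weight sharing.
        in_dim = channels * image_size * image_size
        layers = [nn.Flatten(), nn.Linear(in_dim, width), nn.GELU()]
        for _ in range(depth - 2):
            layers += [nn.Linear(width, width), nn.LayerNorm(width), nn.GELU()]
        layers.append(nn.Linear(width, num_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, channels, image_size, image_size) -> (batch, num_classes)
        return self.net(x)

# Example with TinyImageNet-sized inputs (64x64 RGB, 200 classes).
model = SimpleMLP()
logits = model(torch.randn(8, 3, 64, 64))
print(logits.shape)  # torch.Size([8, 200])
```

Because every layer is a dense matrix multiply, such a model maps efficiently onto a single GPU, which is consistent with the computational-efficiency point made in the abstract.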