Your Transformer is Secretly Linear

May 19, 2024
作者: Anton Razzhigaev, Matvey Mikhalchuk, Elizaveta Goncharova, Nikolai Gerasimenko, Ivan Oseledets, Denis Dimitrov, Andrey Kuznetsov
cs.AI

Abstract

This paper reveals a novel linear characteristic exclusive to transformer decoders, including GPT, LLaMA, OPT, BLOOM, and others. We analyze embedding transformations between sequential layers, uncovering a near-perfect linear relationship (Procrustes similarity score of 0.99). However, linearity decreases when the residual component is removed, owing to the consistently low output norm of the transformer layer. Our experiments show that removing or linearly approximating some of the most linear transformer blocks does not significantly affect the loss or model performance. Moreover, in our pretraining experiments on smaller models, we introduce a cosine-similarity-based regularization aimed at reducing layer linearity. This regularization improves performance metrics on benchmarks such as Tiny Stories and SuperGLUE while also successfully decreasing the linearity of the models. This study challenges the existing understanding of transformer architectures, suggesting that their operation may be more linear than previously assumed.
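
As a rough illustration of the kind of analysis described in the abstract, the sketch below extracts hidden states from a small decoder-only model and scores how close the transition between consecutive layers is to a similarity transform. This is only a sketch under stated assumptions: SciPy's orthogonal Procrustes routine (rotation, scaling, and translation) stands in for the paper's linearity metric, and "gpt2" with a toy input text is an illustrative choice; the exact score definition used by the authors may differ.

```python
# Sketch of a layer-wise linearity probe: compare embeddings from consecutive
# layers of a small decoder-only model with a Procrustes-style similarity.
# Assumptions (not taken from the authors' code): scipy.spatial.procrustes is
# used as a stand-in metric, and "gpt2" is just an example model.

import torch
from scipy.spatial import procrustes
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2").eval()

text = "The quick brown fox jumps over the lazy dog. " * 8  # toy input
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    # hidden_states is a tuple: the input embeddings followed by each block's output.
    hidden = model(**inputs, output_hidden_states=True).hidden_states

for layer, (h_in, h_out) in enumerate(zip(hidden[:-1], hidden[1:])):
    x = h_in[0].numpy()   # [num_tokens, hidden_dim] at layer `layer`
    y = h_out[0].numpy()  # same tokens one layer deeper
    # disparity lies in [0, 1]: 0 means the two point sets are related by an
    # exact similarity transform, so 1 - disparity plays the role of a
    # linearity score for this layer transition.
    _, _, disparity = procrustes(x, y)
    print(f"layer {layer:2d} -> {layer + 1:2d}: similarity ~ {1.0 - disparity:.3f}")
```

Orthogonal Procrustes is chosen here only because it is a standard, readily available similarity measure between two embedding sets; the paper's cosine-similarity-based regularizer used during pretraining is not reproduced in this sketch.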
