Transformer Layers as Painters

July 12, 2024
Authors: Qi Sun, Marc Pickett, Aakash Kumar Nain, Llion Jones
cs.AI

Abstract

Despite their nearly universal adoption for large language models, the internal workings of transformers are not well understood. We aim to better understand the impact of removing or reorganizing information throughout the layers of a pretrained transformer. Such an understanding could both yield better usage of existing models and enable architectural improvements that produce new variants. We present a series of empirical studies on frozen models that show that the lower and final layers of pretrained transformers differ from middle layers, but that middle layers have a surprising amount of uniformity. We further show that some classes of problems have robustness to skipping layers, running the layers in an order different from how they were trained, or running the layers in parallel. Our observations suggest that even frozen pretrained models may gracefully trade accuracy for latency by skipping layers or running layers in parallel.
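
To make the kind of layer manipulation described in the abstract concrete, the sketch below drops the middle blocks of a frozen pretrained decoder and compares greedy generations before and after. It assumes the Hugging Face transformers library and GPT-2, whose decoder blocks are exposed as model.transformer.h; the model choice, the keep_first/keep_last split, and the skip_middle_layers helper are illustrative assumptions, not the models or procedure used in the paper.

```python
# Minimal sketch: skip the middle layers of a frozen pretrained transformer.
# Assumes the Hugging Face `transformers` library and GPT-2, whose decoder
# blocks live in `model.transformer.h`; the helper and layer split below are
# illustrative, not the paper's own experimental setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()  # frozen: evaluation mode, no weights are updated below


def skip_middle_layers(model, keep_first=2, keep_last=2):
    """Keep only the first and last decoder blocks, dropping the middle ones."""
    blocks = model.transformer.h
    kept = list(blocks[:keep_first]) + list(blocks[-keep_last:])
    model.transformer.h = torch.nn.ModuleList(kept)
    model.config.n_layer = len(kept)  # keep the config consistent with the model
    return model


prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    full = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    print("full model    :", tokenizer.decode(full[0]))

    pruned = skip_middle_layers(model, keep_first=2, keep_last=2)
    skipped = pruned.generate(**inputs, max_new_tokens=5, do_sample=False)
    print("middle skipped:", tokenizer.decode(skipped[0]))
```

Reordering amounts to permuting the same block list before reassigning it; a parallel variant would instead feed each middle block the same hidden state and combine their outputs, which requires calling the blocks directly rather than going through generate.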
