

Exponentially Faster Language Modelling

November 15, 2023
Authors: Peter Belcak, Roger Wattenhofer
cs.AI

Abstract

Language models only really need to use an exponential fraction of their neurons for individual inferences. As proof, we present UltraFastBERT, a BERT variant that uses 0.3% of its neurons during inference while performing on par with similar BERT models. UltraFastBERT selectively engages just 12 out of 4095 neurons for each layer inference. This is achieved by replacing feedforward networks with fast feedforward networks (FFFs). While no truly efficient implementation currently exists to unlock the full acceleration potential of conditional neural execution, we provide high-level CPU code achieving 78x speedup over the optimized baseline feedforward implementation, and a PyTorch implementation delivering 40x speedup over the equivalent batched feedforward inference. We publish our training code, benchmarking setup, and model weights.
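To make the "12 out of 4095 neurons" claim concrete: a fast feedforward network arranges its neurons as a balanced binary tree and, at inference, evaluates only the neurons on a single root-to-leaf path, branching at each node on the sign of that node's pre-activation. The following is a minimal NumPy sketch of this conditional traversal, not the authors' implementation; the function and parameter names are illustrative, and ReLU stands in for whatever activation the released model actually uses. A depth-12 tree holds 2^12 - 1 = 4095 neurons, of which exactly 12 lie on any one path.

```python
import numpy as np

def fff_inference(x, w_in, w_out, depth=12):
    """Illustrative fast-feedforward (FFF) inference sketch.

    The 2**depth - 1 neurons form an implicit heap-ordered binary
    tree; only the `depth` neurons on one root-to-leaf path are
    evaluated (12 of 4095 when depth == 12).
    """
    y = np.zeros(w_out.shape[1])
    node = 0  # root of the implicit tree
    for _ in range(depth):
        logit = x @ w_in[node]              # this node's pre-activation
        y += max(logit, 0.0) * w_out[node]  # ReLU'd contribution (assumption)
        node = 2 * node + (2 if logit > 0 else 1)  # branch on the sign
    return y
```

The per-token cost is `depth` dot products instead of one for every neuron, which is the source of the exponential gap the abstract refers to: tree depth grows logarithmically in the number of neurons.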