
Low-rank Adaptation of Large Language Model Rescoring for Parameter-Efficient Speech Recognition

September 26, 2023
Authors: Yu Yu, Chao-Han Huck Yang, Jari Kolehmainen, Prashanth G. Shivakumar, Yile Gu, Sungho Ryu, Roger Ren, Qi Luo, Aditya Gourav, I-Fan Chen, Yi-Chieh Liu, Tuan Dinh, Ankur Gandhe, Denis Filimonov, Shalini Ghosh, Andreas Stolcke, Ariya Rastrow, Ivan Bulyko
cs.AI

Abstract

We propose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring. Although pretrained language models (LMs) like BERT have shown superior performance in second-pass rescoring, the high computational cost of scaling up the pretraining stage and of adapting the pretrained models to specific domains limits their practical use in rescoring. Here we present a method based on low-rank decomposition to train a rescoring BERT model and adapt it to new domains using only a fraction (0.08%) of the pretrained parameters. The inserted low-rank matrices are optimized through a discriminative training objective along with a correlation-based regularization loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is evaluated on LibriSpeech and internal datasets, with training times reduced by factors of 5.4 to 3.6.
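To make the adaptation scheme concrete, below is a minimal PyTorch sketch of a LoRA-style low-rank update wrapped around one frozen linear projection, in the spirit of the inserted matrices described above. The module name `LoRALinear`, the rank and scaling hyperparameters, and the parameter-fraction check are illustrative assumptions, not the authors' LoRB implementation.

```python
# Minimal sketch of a LoRA-style low-rank update for one linear projection.
# Hypothetical names and hyperparameters; not the paper's released code.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen

        # Inserted low-rank matrices: A projects down to `rank`, B projects back up.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + scaling * (B A) x; only A and B receive gradients.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    # For a full BERT rescorer, this fraction shrinks toward the ~0.08% quoted above.
    print(f"trainable fraction for this single layer: {trainable / total:.2%}")
```

In a full rescoring setup, such wrappers would be applied to the projections of the pretrained BERT rescorer, so that only the small A and B matrices are updated during domain adaptation while all pretrained weights remain frozen.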