Multilingual and Fully Non-Autoregressive ASR with Large Language Model Fusion: A Comprehensive Study
January 23, 2024
Authors: W. Ronny Huang, Cyril Allauzen, Tongzhou Chen, Kilol Gupta, Ke Hu, James Qin, Yu Zhang, Yongqiang Wang, Shuo-Yiin Chang, Tara N. Sainath
cs.AI
Abstract
In the era of large models, the autoregressive nature of decoding often
makes latency a significant bottleneck. We propose a
non-autoregressive LM-fused ASR system that effectively leverages the
parallelization capabilities of accelerator hardware. Our approach combines the
Universal Speech Model (USM) and the PaLM 2 language model in per-segment
scoring mode, achieving an average relative WER improvement across all
languages of 10.8% on FLEURS and 3.6% on YouTube captioning. Furthermore, our
comprehensive ablation study analyzes key parameters such as LLM size, context
length, vocabulary size, and fusion methodology. For instance, we explore the
impact of LLM size ranging from 128M to 340B parameters on ASR performance.
This study provides valuable insights into the factors influencing the
effectiveness of practical large-scale LM-fused speech recognition systems.
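To make the per-segment scoring mode concrete, below is a minimal sketch of n-best rescoring with log-linear LM fusion. The function name `fuse_scores`, the `llm_logprob` callable, and the `lm_weight` value are illustrative assumptions, not the paper's actual API; in the paper's system, USM produces the hypotheses and PaLM 2 scores each segment, and because hypotheses are scored independently the LLM calls can be batched in parallel on accelerator hardware.

```python
def fuse_scores(asr_nbest, llm_logprob, lm_weight=0.3):
    """Rescore one segment's n-best list by log-linear interpolation.

    asr_nbest:   list of (hypothesis_text, asr_log_score) pairs.
    llm_logprob: callable returning an LLM log-probability for a text
                 segment; since every hypothesis is scored independently,
                 these calls can be batched in parallel on accelerators.
    lm_weight:   interpolation weight for the LLM score (hypothetical value).
    """
    rescored = [(text, score + lm_weight * llm_logprob(text))
                for text, score in asr_nbest]
    # Keep the hypothesis with the highest fused score.
    return max(rescored, key=lambda item: item[1])

# Toy usage with a stub LLM scorer that prefers the grammatical hypothesis.
nbest = [("the cat sat", -4.2), ("the cat sad", -4.0)]
best = fuse_scores(nbest, llm_logprob=lambda t: -1.0 if t == "the cat sat" else -3.0)
print(best)  # -> ('the cat sat', -4.5)
```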