
UT5: Pretraining Non-autoregressive T5 with Unrolled Denoising

November 14, 2023
Authors: Mahmoud G. Salem, Jiayu Ye, Chu-Cheng Lin, Frederick Liu
cs.AI

Abstract

Recent advances in Transformer-based large language models have driven great strides in natural language generation. However, to decode K tokens, an autoregressive model needs K sequential forward passes, which can become a performance bottleneck for large language models. Much non-autoregressive (NAR) research aims to address this sequentiality bottleneck, although most of it has focused on dedicated architectures evaluated on supervised benchmarks. In this work, we study unsupervised pretraining for non-autoregressive T5 models via unrolled denoising and show state-of-the-art results on downstream generation tasks such as SQuAD question generation and XSum.
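
To make the sequentiality point concrete, here is a minimal sketch of the decoding-cost difference the abstract describes. It is not the paper's implementation: ar_model, nar_model, autoregressive_decode, and nar_decode are hypothetical names, and the NAR path is shown with a single refinement step purely for illustration.

```python
# Sketch: autoregressive decoding needs K sequential forward passes,
# while a non-autoregressive (NAR) model predicts all K positions in
# a constant number of passes. The "models" below are toy stand-ins.

def autoregressive_decode(ar_model, prompt, k):
    """Decode k tokens with k sequential forward passes (step i depends on step i-1)."""
    tokens = list(prompt)
    for _ in range(k):
        next_token = ar_model(tokens)      # one full forward pass per generated token
        tokens.append(next_token)
    return tokens[len(prompt):]

def nar_decode(nar_model, prompt, k, steps=1):
    """Decode all k target positions jointly; `steps` is a small constant, independent of k."""
    targets = ["<mask>"] * k               # start from fully masked target positions
    for _ in range(steps):
        targets = nar_model(prompt, targets)  # predicts every position in parallel
    return targets

# Toy stand-ins so the sketch runs end to end (hypothetical, not the paper's model).
ar_model = lambda toks: f"tok{len(toks)}"
nar_model = lambda prompt, targets: [f"tok{i}" for i in range(len(targets))]

print(autoregressive_decode(ar_model, ["<bos>"], 5))  # 5 sequential model calls
print(nar_decode(nar_model, ["<bos>"], 5))            # 1 model call covers all 5 tokens
```

The contrast is the loop structure: the autoregressive loop runs K dependent model calls, whereas the NAR loop runs a fixed number of refinement passes regardless of K, which is the bottleneck the paper targets.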