FLAP: Fast Language-Audio Pre-training
November 2, 2023
Authors: Ching-Feng Yeh, Po-Yao Huang, Vasu Sharma, Shang-Wen Li, Gargi Ghosh
cs.AI
Abstract
We propose Fast Language-Audio Pre-training (FLAP), a self-supervised
approach that efficiently and effectively learns aligned audio and language
representations through masking, contrastive learning and reconstruction. For
efficiency, FLAP randomly drops audio spectrogram tokens, focusing solely on
the remaining ones for self-supervision. Through inter-modal contrastive
learning, FLAP learns to align paired audio and text representations in a
shared latent space. Notably, FLAP leverages multiple augmented views via
masking for inter-modal contrast and learns to reconstruct the masked portion
of audio tokens. Moreover, FLAP leverages large language models (LLMs) to
augment the text inputs, contributing to improved performance. These approaches
lead to more robust and informative audio-text representations, enabling FLAP
to achieve state-of-the-art (SoTA) performance on audio-text retrieval tasks on
AudioCaps (achieving 53.0% R@1) and Clotho (achieving 25.5% R@1).
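The two core ingredients named in the abstract, random dropping of audio spectrogram tokens for efficiency and an inter-modal contrastive objective over paired audio/text embeddings, can be illustrated with a minimal numpy sketch. This is not the paper's implementation; the function names, the keep ratio, and the temperature value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def drop_tokens(tokens, keep_ratio=0.25):
    """Randomly keep a fraction of spectrogram tokens; the encoder then
    runs only on the survivors (the efficiency trick described above).
    keep_ratio is a hypothetical value, not the paper's setting."""
    n = tokens.shape[0]
    keep = rng.choice(n, size=max(1, int(n * keep_ratio)), replace=False)
    return tokens[np.sort(keep)]

def inter_modal_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss: matched audio/text pairs (the
    diagonal of the similarity matrix) are pulled together in the shared
    latent space, mismatched pairs pushed apart."""
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = a @ t.T / temperature          # pairwise cosine similarities
    labels = np.arange(len(a))              # i-th audio matches i-th text

    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()           # diagonal = positives

    # Average of audio-to-text and text-to-audio directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In this sketch, a well-aligned batch (each audio embedding close to its paired text embedding) yields a lower loss than a batch paired with unrelated text, which is the signal that drives the alignment. The masked-reconstruction and LLM-augmentation components are omitted here.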