

Natural Language Supervision for General-Purpose Audio Representations

September 11, 2023
Authors: Benjamin Elizalde, Soham Deshmukh, Huaming Wang
cs.AI

Abstract

Audio-Language models jointly learn multimodal text and audio representations that enable Zero-Shot inference. Models rely on their encoders to create powerful representations of the input and to generalize across multiple tasks spanning sound, music, and speech. Although these models have achieved remarkable performance, a gap remains with respect to task-specific models. In this paper, we propose a Contrastive Language-Audio Pretraining model that is pretrained on a diverse collection of 4.6M audio-text pairs and employs two innovative encoders for Zero-Shot inference. To learn audio representations, we trained an audio encoder on 22 audio tasks, instead of the standard sound event classification training. To learn language representations, we trained an autoregressive decoder-only model instead of the standard encoder-only models. The audio and language representations are then brought into a joint multimodal space using Contrastive Learning. Our encoders improve downstream performance by a significant margin. We extensively evaluated the generalization of our representations on 26 downstream tasks, the largest such evaluation in the literature. Our model achieves state-of-the-art results on several tasks, leading the way towards general-purpose audio representations.
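For intuition, the sketch below (PyTorch, not the authors' code) illustrates the standard CLIP-style symmetric contrastive objective commonly used to align two modalities in a joint space, and how Zero-Shot classification then reduces to cosine similarity between an audio embedding and text embeddings of class prompts. The function names, tensor shapes, and the prompt-embedding setup are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of contrastive audio-text alignment and Zero-Shot inference.
# Assumes `audio_emb` / `text_emb` are batches of embeddings already produced
# by some audio and text encoders (hypothetical; not the paper's code).
import torch
import torch.nn.functional as F

def contrastive_loss(audio_emb: torch.Tensor, text_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric cross-entropy over the audio-text similarity matrix."""
    a = F.normalize(audio_emb, dim=-1)   # (B, D) unit-norm audio embeddings
    t = F.normalize(text_emb, dim=-1)    # (B, D) unit-norm text embeddings
    logits = a @ t.T / temperature       # (B, B) pairwise cosine similarities
    # Matched audio-text pairs lie on the diagonal of the similarity matrix.
    labels = torch.arange(logits.size(0), device=logits.device)
    # Average the audio-to-text and text-to-audio directions.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

def zero_shot_classify(audio_emb: torch.Tensor,
                       class_text_emb: torch.Tensor) -> torch.Tensor:
    """Assign each clip to the class prompt with the highest cosine similarity."""
    a = F.normalize(audio_emb, dim=-1)        # (N, D) audio embeddings
    t = F.normalize(class_text_emb, dim=-1)   # (C, D) one embedding per class prompt
    return (a @ t.T).argmax(dim=-1)           # (N,) predicted class indices
```

In this scheme, adding a new downstream task requires no retraining: one embeds a textual prompt per class label and ranks classes by similarity, which is what makes Zero-Shot inference possible.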