MERT: Acoustic Music Understanding Model with Large-Scale Self-supervised Training
May 31, 2023
Authors: Yizhi Li, Ruibin Yuan, Ge Zhang, Yinghao Ma, Xingran Chen, Hanzhi Yin, Chenghua Lin, Anton Ragni, Emmanouil Benetos, Norbert Gyenge, Roger Dannenberg, Ruibo Liu, Wenhu Chen, Gus Xia, Yemin Shi, Wenhao Huang, Yike Guo, Jie Fu
cs.AI
Abstract
Self-supervised learning (SSL) has recently emerged as a promising paradigm
for training generalisable models on large-scale data in the fields of vision,
text, and speech. Although SSL has been proven effective in speech and audio,
its application to music audio has yet to be thoroughly explored. This is
primarily due to the distinctive challenges associated with modelling musical
knowledge, particularly the tonal and pitched characteristics of music. To
address this research gap, we propose an acoustic Music undERstanding model
with large-scale self-supervised Training (MERT), which incorporates teacher
models to provide pseudo labels during masked language modelling (MLM)-style
acoustic pre-training. In our exploration, we identified a superior combination
of teacher models that outperforms conventional speech and audio approaches.
This combination includes an acoustic teacher based on
Residual Vector Quantization - Variational AutoEncoder (RVQ-VAE) and a musical
teacher based on the Constant-Q Transform (CQT). These teachers effectively
guide our student model, a BERT-style transformer encoder, to better model
music audio. In addition, we introduce an in-batch noise mixture augmentation
to enhance the representation robustness. Furthermore, we explore a wide range
of settings to overcome the instability in acoustic language model
pre-training, which allows our designed paradigm to scale from 95M to 330M
parameters. Experimental results indicate that our model can generalise and
perform well on 14 music understanding tasks, attaining state-of-the-art
(SOTA) overall scores. The code and models are available at:
https://github.com/yizhilll/MERT.
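
To make the pre-training recipe described above concrete, the following is a
minimal sketch of MLM-style masked acoustic pre-training with the two teacher
targets named in the abstract: classification over discrete RVQ-VAE codes
(acoustic teacher) and regression onto a CQT spectrogram (musical teacher).
All class, method, and argument names here are hypothetical illustrations
under assumed tensor shapes, not the actual API of the MERT repository.

# Sketch of dual-teacher masked acoustic pre-training (hypothetical names).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedAcousticPretrainer(nn.Module):
    def __init__(self, encoder: nn.Module, hidden: int,
                 n_codebooks: int, codebook_size: int, n_cqt_bins: int):
        super().__init__()
        self.encoder = encoder  # BERT-style transformer over audio frames
        # One classification head per RVQ codebook (acoustic teacher targets).
        self.acoustic_heads = nn.ModuleList(
            nn.Linear(hidden, codebook_size) for _ in range(n_codebooks))
        # Regression head onto the CQT spectrogram (musical teacher target).
        self.cqt_head = nn.Linear(hidden, n_cqt_bins)

    def forward(self, frames, mask, rvq_codes, cqt_target):
        # frames: (B, T, D) audio features; mask: (B, T) bool, True = masked;
        # rvq_codes: (B, T, n_codebooks) long; cqt_target: (B, T, n_cqt_bins).
        h = self.encoder(frames, mask)  # (B, T, hidden); masking details elided
        # Acoustic loss: predict the discrete RVQ codes at masked positions.
        acoustic_loss = sum(
            F.cross_entropy(head(h)[mask], rvq_codes[..., k][mask])
            for k, head in enumerate(self.acoustic_heads))
        # Musical loss: reconstruct the CQT at masked positions.
        musical_loss = F.mse_loss(self.cqt_head(h)[mask], cqt_target[mask])
        return acoustic_loss + musical_loss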
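
The musical teacher's target can be produced with a standard constant-Q
transform. The snippet below uses librosa.cqt (a real librosa function); the
sample rate, hop length, and bin count are illustrative assumptions, not the
paper's exact configuration.

# Sketch of computing a CQT target with librosa (parameters illustrative).
import numpy as np
import librosa

def cqt_target(wav: np.ndarray, sr: int = 24000, hop: int = 512,
               n_bins: int = 84) -> np.ndarray:
    # Log-magnitude CQT, one row per frame, shape (frames, n_bins).
    c = np.abs(librosa.cqt(wav, sr=sr, hop_length=hop, n_bins=n_bins))
    return np.log1p(c).T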
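
The in-batch noise mixture augmentation can be understood as mixing each clip
with another randomly chosen clip from the same batch at a random gain, so the
model sees corrupted inputs without any extra data loading. The function below
is an illustrative guess at such an augmentation, not the paper's exact
procedure.

# Hypothetical sketch of in-batch noise mixing on raw waveforms.
import torch

def in_batch_noise_mix(wavs: torch.Tensor, max_gain: float = 0.5) -> torch.Tensor:
    # wavs: (B, T) batch of raw audio clips.
    perm = torch.randperm(wavs.size(0), device=wavs.device)  # mix partners
    gains = torch.rand(wavs.size(0), 1, device=wavs.device) * max_gain
    return wavs + gains * wavs[perm]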