

MuPT: A Generative Symbolic Music Pretrained Transformer

April 9, 2024
作者: Xingwei Qu, Yuelin Bai, Yinghao Ma, Ziya Zhou, Ka Man Lo, Jiaheng Liu, Ruibin Yuan, Lejun Min, Xueling Liu, Tianyu Zhang, Xinrun Du, Shuyue Guo, Yiming Liang, Yizhi Li, Shangda Wu, Junting Zhou, Tianyu Zheng, Ziyang Ma, Fengze Han, Wei Xue, Gus Xia, Emmanouil Benetos, Xiang Yue, Chenghua Lin, Xu Tan, Stephen W. Huang, Wenhu Chen, Jie Fu, Ge Zhang
cs.AI

Abstract

In this paper, we explore the application of Large Language Models (LLMs) to the pre-training of music. While the prevalent use of MIDI in music modeling is well-established, our findings suggest that LLMs are inherently more compatible with ABC Notation, which aligns more closely with their design and strengths, thereby enhancing the model's performance in musical composition. To address the challenges associated with misaligned measures from different tracks during generation, we propose the development of a Synchronized Multi-Track ABC Notation (SMT-ABC Notation), which aims to preserve coherence across multiple musical tracks. Our contributions include a series of models capable of handling up to 8192 tokens, covering 90% of the symbolic music data in our training set. Furthermore, we explore the implications of the Symbolic Music Scaling Law (SMS Law) on model performance. The results indicate a promising direction for future research in music generation, offering extensive resources for community-led research through our open-source contributions.
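The abstract does not spell out the exact SMT-ABC format, but the core idea is to keep measures from all tracks aligned in the token stream. The minimal Python sketch below illustrates that general idea by interleaving the bars of several ABC voices measure by measure; the function name, bar-delimiter handling, and output layout are assumptions for illustration, not the paper's specification.

```python
# Minimal sketch (not the paper's exact SMT-ABC spec): interleave bars from
# multiple ABC voices so that measure i of every track is emitted before
# measure i+1 of any track, keeping the tracks aligned during generation.

def interleave_abc_voices(voices: dict[str, str]) -> str:
    """voices maps a voice id (e.g. 'V:1') to its ABC body, with bars separated by '|'."""
    # Split each voice into bars, dropping empty fragments left by trailing barlines.
    bars_per_voice = {
        vid: [bar.strip() for bar in body.split("|") if bar.strip()]
        for vid, body in voices.items()
    }
    n_bars = max(len(bars) for bars in bars_per_voice.values())

    lines = []
    for i in range(n_bars):
        # Emit bar i of every voice before moving on to bar i + 1.
        for vid, bars in bars_per_voice.items():
            bar = bars[i] if i < len(bars) else "z4"  # pad missing bars with rests
            lines.append(f"[{vid}] {bar} |")
    return "\n".join(lines)


if __name__ == "__main__":
    example = {
        "V:1": "C D E F | G A B c |",
        "V:2": "E2 G2 | c2 e2 |",
    }
    print(interleave_abc_voices(example))
```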

