HierSpeech++: Bridging the Gap between Semantic and Acoustic Representation of Speech by Hierarchical Variational Inference for Zero-shot Speech Synthesis
November 21, 2023
Authors: Sang-Hoon Lee, Ha-Yeong Choi, Seung-Bin Kim, Seong-Whan Lee
cs.AI
Abstract
Large language model (LLM)-based speech synthesis has been widely adopted for
zero-shot speech synthesis. However, such models require large-scale data and
share the limitations of previous autoregressive speech models,
including slow inference speed and a lack of robustness. This paper proposes
HierSpeech++, a fast and strong zero-shot speech synthesizer for text-to-speech
(TTS) and voice conversion (VC). We verify that a hierarchical speech synthesis
framework can significantly improve the robustness and expressiveness of
synthetic speech. Furthermore, we significantly improve the naturalness and
speaker similarity of synthetic speech even in zero-shot speech synthesis
scenarios. For text-to-speech, we adopt the text-to-vec framework, which
generates a self-supervised speech representation and an F0 representation
based on text representations and prosody prompts. Then, HierSpeech++ generates
speech from the generated vector, F0, and voice prompt. We further introduce a
highly efficient speech super-resolution framework from 16 kHz to 48 kHz. The
experimental results demonstrate that the hierarchical variational autoencoder
can be a strong zero-shot speech synthesizer, outperforming
LLM-based and diffusion-based models. Moreover, we achieve the first
human-level-quality zero-shot speech synthesis. Audio samples and source code
are available at https://github.com/sh-lee-prml/HierSpeechpp.
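The abstract describes a three-stage inference pipeline: a text-to-vec module maps text and a prosody prompt to a self-supervised speech representation plus an F0 track, a hierarchical synthesizer decodes these together with a voice prompt into 16 kHz speech, and a super-resolution module upsamples to 48 kHz. The sketch below only illustrates this dataflow; all function names, dimensions, hop size, and the naive repeat-based upsampling are assumptions for illustration, not the authors' actual API.

```python
import numpy as np

# Hypothetical sketch of the HierSpeech++ zero-shot TTS dataflow described in
# the abstract. Names, shapes, and constants are illustrative assumptions.

def text_to_vec(text, prosody_prompt):
    """Stage 1: text + prosody prompt -> self-supervised speech
    representation and frame-level F0 (placeholder values)."""
    n_frames = 4 * len(text)                        # assumed frames per char
    rng = np.random.default_rng(0)
    ssl_repr = rng.standard_normal((n_frames, 256)) # assumed 256-dim frames
    f0 = np.full(n_frames, 120.0)                   # placeholder F0 in Hz
    return ssl_repr, f0

def hierarchical_synthesizer(ssl_repr, f0, voice_prompt, sr=16_000):
    """Stage 2: hierarchical VAE decodes representation, F0, and voice
    prompt into a 16 kHz waveform (placeholder: silence)."""
    hop = 320                                       # assumed hop size
    return np.zeros(len(ssl_repr) * hop), sr

def super_resolution(wav, sr, target_sr=48_000):
    """Stage 3: 16 kHz -> 48 kHz; naive sample repetition stands in for
    the learned super-resolution module."""
    factor = target_sr // sr
    return np.repeat(wav, factor), target_sr

# End-to-end zero-shot TTS: text + reference prompts -> 48 kHz speech.
ssl_repr, f0 = text_to_vec("hello world", prosody_prompt=None)
wav16, sr = hierarchical_synthesizer(ssl_repr, f0, voice_prompt=None)
wav48, sr = super_resolution(wav16, sr)
print(sr, len(wav48))
```

For voice conversion, the same stage-2 and stage-3 modules would be reused with a representation extracted from source speech instead of stage-1 output.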