HierSpeech++: Bridging the Gap between Semantic and Acoustic Representation of Speech by Hierarchical Variational Inference for Zero-shot Speech Synthesis
November 21, 2023
Authors: Sang-Hoon Lee, Ha-Yeong Choi, Seung-Bin Kim, Seong-Whan Lee
cs.AI
Abstract
Large language models (LLM)-based speech synthesis has been widely adopted in
zero-shot speech synthesis. However, they require large-scale data and
suffer from the same limitations as previous autoregressive speech models,
including slow inference speed and a lack of robustness. This paper proposes
HierSpeech++, a fast and strong zero-shot speech synthesizer for text-to-speech
(TTS) and voice conversion (VC). We verify that hierarchical speech synthesis
frameworks can significantly improve the robustness and expressiveness of
synthetic speech. Furthermore, we significantly improve the naturalness and
speaker similarity of synthetic speech even in zero-shot speech synthesis
scenarios. For text-to-speech, we adopt the text-to-vec framework, which
generates a self-supervised speech representation and an F0 representation
based on text representations and prosody prompts. Then, HierSpeech++ generates
speech from the generated vector, F0, and voice prompt. We further introduce a
highly efficient speech super-resolution framework that upsamples speech from
16 kHz to 48 kHz. The
experimental results demonstrated that the hierarchical variational autoencoder
could be a strong zero-shot speech synthesizer given that it outperforms
LLM-based and diffusion-based models. Moreover, we achieved the first
human-level quality zero-shot speech synthesis. Audio samples and source code
are available at https://github.com/sh-lee-prml/HierSpeechpp.
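The three-stage inference pipeline described in the abstract (text-to-vec, hierarchical synthesis, then super-resolution) can be sketched roughly as below. All function names, feature dimensions, frame rates, and hop sizes here are illustrative assumptions, not the authors' actual API; see the linked repository for the real implementation.

```python
# Hypothetical sketch of the HierSpeech++ TTS inference pipeline.
# Shapes and names are assumptions for illustration only.

def text_to_vec(text, prosody_prompt):
    """Stage 1 (text-to-vec): map text + prosody prompt to a
    self-supervised speech representation (one vector per frame)
    and an F0 contour. Frame rate and 256-dim features are assumed."""
    n_frames = 4 * max(len(text), 1)                   # assumed frames per character
    ssl_repr = [[0.0] * 256 for _ in range(n_frames)]  # placeholder SSL features
    f0 = [120.0] * n_frames                            # placeholder pitch (Hz)
    return ssl_repr, f0

def hierspeechpp(ssl_repr, f0, voice_prompt):
    """Stage 2: hierarchical VAE synthesizer; generates a 16 kHz
    waveform from the SSL representation, F0, and voice prompt."""
    hop = 320  # assumed hop size: 320 samples per frame at 16 kHz
    return [0.0] * (len(ssl_repr) * hop)

def speech_sr(wav_16k):
    """Stage 3: speech super-resolution from 16 kHz to 48 kHz
    (a 3x increase in sample count)."""
    return [s for s in wav_16k for _ in range(3)]

def synthesize(text, prosody_prompt, voice_prompt):
    ssl_repr, f0 = text_to_vec(text, prosody_prompt)
    wav_16k = hierspeechpp(ssl_repr, f0, voice_prompt)
    return speech_sr(wav_16k)

wav_48k = synthesize("hello world", prosody_prompt=[], voice_prompt=[])
# wav_48k has 3x as many samples as the intermediate 16 kHz waveform
```

For voice conversion, the same synthesizer would consume a speech-derived SSL representation and F0 instead of the text-to-vec output, with the voice prompt supplying the target speaker identity.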