

E3 TTS: Easy End-to-End Diffusion-based Text to Speech

November 2, 2023
Authors: Yuan Gao, Nobuyuki Morioka, Yu Zhang, Nanxin Chen
cs.AI

Abstract

We propose Easy End-to-End Diffusion-based Text to Speech (E3 TTS), a simple and efficient end-to-end text-to-speech model based on diffusion. E3 TTS takes plain text directly as input and generates an audio waveform through an iterative refinement process. Unlike much prior work, E3 TTS does not rely on any intermediate representations such as spectrogram features or alignment information. Instead, E3 TTS models the temporal structure of the waveform through the diffusion process. Without relying on additional conditioning information, E3 TTS can support flexible latent structure within the given audio. This enables E3 TTS to be easily adapted to zero-shot tasks such as editing without any additional training. Experiments show that E3 TTS can generate high-fidelity audio, approaching the performance of a state-of-the-art neural TTS system. Audio samples are available at https://e3tts.github.io.
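The "iterative refinement process" mentioned above is the standard diffusion sampling loop: start from Gaussian noise and repeatedly denoise it, conditioned on the text, until a waveform emerges. The sketch below illustrates that loop in a minimal, hypothetical form. The `toy_denoiser` function, the linear noise schedule, and all parameter names are illustrative assumptions, not the paper's actual architecture (E3 TTS uses a learned U-Net conditioned on pretrained text-encoder outputs).

```python
import numpy as np

def toy_denoiser(x, t, text_embedding):
    """Hypothetical stand-in for the learned network that predicts the
    noise present in x at step t. A real model would consume the text
    embedding via attention; here it only nudges a deterministic output."""
    return 0.1 * x + 0.01 * text_embedding.mean()

def sample_waveform(num_steps, waveform_len, text_embedding, seed=0):
    """DDPM-style ancestral sampling: begin with pure Gaussian noise and
    iteratively refine it toward a waveform."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, num_steps)   # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    x = rng.standard_normal(waveform_len)        # start from pure noise
    for t in reversed(range(num_steps)):
        eps_hat = toy_denoiser(x, t, text_embedding)
        # Posterior mean: strip out the predicted noise component.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) \
            / np.sqrt(alphas[t])
        if t > 0:                                # re-inject sampling noise
            x = x + np.sqrt(betas[t]) * rng.standard_normal(waveform_len)
    return x

wave = sample_waveform(num_steps=50, waveform_len=16000,
                       text_embedding=np.ones(128))
print(wave.shape)  # (16000,)
```

Because the model operates on the raw waveform with no alignment stage, editing a region of audio amounts to re-running this loop while holding the unedited samples fixed, which is why zero-shot editing requires no extra training.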