

Efficient Speech Language Modeling via Energy Distance in Continuous Latent Space

May 19, 2025
作者: Zhengrui Ma, Yang Feng, Chenze Shao, Fandong Meng, Jie Zhou, Min Zhang
cs.AI

Abstract

We introduce SLED, an alternative approach to speech language modeling by encoding speech waveforms into sequences of continuous latent representations and modeling them autoregressively using an energy distance objective. The energy distance offers an analytical measure of the distributional gap by contrasting simulated and target samples, enabling efficient training to capture the underlying continuous autoregressive distribution. By bypassing reliance on residual vector quantization, SLED avoids discretization errors and eliminates the need for the complicated hierarchical architectures common in existing speech language models. It simplifies the overall modeling pipeline while preserving the richness of speech information and maintaining inference efficiency. Empirical results demonstrate that SLED achieves strong performance in both zero-shot and streaming speech synthesis, showing its potential for broader applications in general-purpose speech language models.
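The abstract does not spell out the training objective, only that it contrasts simulated and target samples. As background, the (generalized) energy distance between distributions \(X\) and \(Y\) is \(D^2(X, Y) = 2\,\mathbb{E}\|X - Y\| - \mathbb{E}\|X - X'\| - \mathbb{E}\|Y - Y'\|\), which is zero iff the distributions match. A minimal NumPy sketch of an empirical estimator follows; the function name and sample shapes are illustrative, not taken from the paper:

```python
import numpy as np

def energy_distance(x, y):
    """Empirical energy distance between sample sets x (n, d) and y (m, d).

    Estimates D^2(X, Y) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||
    using all pairwise Euclidean distances.
    """
    def mean_pairwise(a, b):
        # Mean Euclidean distance over all pairs of rows from a and b.
        diff = a[:, None, :] - b[None, :, :]
        return np.linalg.norm(diff, axis=-1).mean()

    return 2 * mean_pairwise(x, y) - mean_pairwise(x, x) - mean_pairwise(y, y)

rng = np.random.default_rng(0)
x = rng.normal(size=(64, 4))           # "target" samples
y = rng.normal(loc=3.0, size=(64, 4))  # "simulated" samples, shifted away
print(energy_distance(x, x))  # identical sample sets give exactly 0
print(energy_distance(x, y))  # mismatched distributions give a positive gap
```

Because the estimator is a smooth function of the simulated samples, it can serve directly as a differentiable loss for matching a model's continuous output distribution to data, which is the role the abstract describes for it.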

