Efficient Speech Language Modeling via Energy Distance in Continuous Latent Space
May 19, 2025
作者: Zhengrui Ma, Yang Feng, Chenze Shao, Fandong Meng, Jie Zhou, Min Zhang
cs.AI
Abstract
We introduce SLED, an alternative approach to speech language modeling by
encoding speech waveforms into sequences of continuous latent representations
and modeling them autoregressively using an energy distance objective. The
energy distance offers an analytical measure of the distributional gap by
contrasting simulated and target samples, enabling efficient training to
capture the underlying continuous autoregressive distribution. By bypassing
reliance on residual vector quantization, SLED avoids discretization errors and
eliminates the need for the complicated hierarchical architectures common in
existing speech language models. It simplifies the overall modeling pipeline
while preserving the richness of speech information and maintaining inference
efficiency. Empirical results demonstrate that SLED achieves strong performance
in both zero-shot and streaming speech synthesis, showing its potential for
broader applications in general-purpose speech language models.Summary
AI-Generated Summary