Discrete Audio Tokens: More Than a Survey!
June 12, 2025
作者: Pooneh Mousavi, Gallil Maimon, Adel Moumen, Darius Petermann, Jiatong Shi, Haibin Wu, Haici Yang, Anastasia Kuznetsova, Artem Ploujnikov, Ricard Marxer, Bhuvana Ramabhadran, Benjamin Elizalde, Loren Lugosch, Jinyu Li, Cem Subakan, Phil Woodland, Minje Kim, Hung-yi Lee, Shinji Watanabe, Yossi Adi, Mirco Ravanelli
cs.AI
Abstract
Discrete audio tokens are compact representations that aim to preserve
perceptual quality, phonetic content, and speaker characteristics while
enabling efficient storage and inference, as well as competitive performance
across diverse downstream tasks. They provide a practical alternative to
continuous features, enabling the integration of speech and audio into modern
large language models (LLMs). As interest in token-based audio processing
grows, various tokenization methods have emerged, and several surveys have
reviewed the latest progress in the field. However, existing studies often
focus on specific domains or tasks and lack a unified comparison across various
benchmarks. This paper presents a systematic review and benchmark of discrete
audio tokenizers, covering three domains: speech, music, and general audio. We
propose a taxonomy of tokenization approaches based on encoder-decoder
architecture, quantization techniques, training paradigm, streamability, and application
domains. We evaluate tokenizers on multiple benchmarks for reconstruction,
downstream performance, and acoustic language modeling, and analyze trade-offs
through controlled ablation studies. Our findings highlight key limitations,
practical considerations, and open challenges, providing insight and guidance
for future research in this rapidly evolving area. For more information,
including our main results and tokenizer database, please refer to our website:
https://poonehmousavi.github.io/dates-website/.
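To give intuition for the quantization techniques the taxonomy covers, the sketch below illustrates residual vector quantization (RVQ), the multi-stage scheme used by many neural audio codecs: each stage assigns the current residual to its nearest codebook entry, and the selected code is subtracted before the next stage quantizes what remains. The codebook sizes, dimensions, and random data here are purely illustrative and are not taken from any tokenizer benchmarked in the paper.

```python
import numpy as np

# Illustrative RVQ: each frame embedding is encoded as a tuple of
# discrete token ids, one per quantization stage.
rng = np.random.default_rng(0)
n_stages, codebook_size, dim = 4, 256, 8

# Random codebooks stand in for learned ones (hypothetical values).
codebooks = rng.normal(size=(n_stages, codebook_size, dim))
frames = rng.normal(size=(100, dim))  # continuous frame embeddings

residual = frames.copy()
tokens = np.empty((100, n_stages), dtype=np.int64)
reconstruction = np.zeros_like(frames)

for q in range(n_stages):
    # Nearest codebook entry (Euclidean distance) for each residual.
    dists = np.linalg.norm(residual[:, None, :] - codebooks[q][None, :, :], axis=-1)
    idx = dists.argmin(axis=1)
    tokens[:, q] = idx
    # Subtract the chosen code; later stages quantize the remainder.
    chosen = codebooks[q][idx]
    reconstruction += chosen
    residual -= chosen

print(tokens.shape)          # (100, 4): 4 token ids per frame
print(reconstruction.shape)  # (100, 8): sum of selected codes
```

A single-codebook tokenizer corresponds to `n_stages = 1`; adding stages trades a higher bitrate for lower reconstruction error, which is one of the trade-offs the paper's ablations examine.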