PRiSM: Benchmarking Phone Realization in Speech Models
January 20, 2026
Authors: Shikhar Bharadwaj, Chin-Jou Li, Yoonjae Kim, Kwanghee Choi, Eunjung Yeo, Ryan Soh-Eun Shim, Hanyu Zhou, Brendon Boldt, Karen Rosero Jacome, Kalvin Chang, Darsh Agrawal, Keer Xu, Chao-Han Huck Yang, Jian Zhu, Shinji Watanabe, David R. Mortensen
cs.AI
Abstract
Phone recognition (PR) serves as the atomic interface for language-agnostic modeling in cross-lingual speech processing and phonetic analysis. Despite long-standing efforts in developing PR systems, current evaluations measure only surface-level transcription accuracy. We introduce PRiSM, the first open-source benchmark designed to expose blind spots in phonetic perception through intrinsic and extrinsic evaluation of PR systems. PRiSM standardizes transcription-based evaluation and assesses downstream utility in clinical, educational, and multilingual settings with transcription and representation probes. We find that diverse language exposure during training is key to PR performance, that encoder-CTC models are the most stable, and that specialized PR models still outperform Large Audio Language Models. PRiSM releases code, recipes, and datasets to move the field toward multilingual speech models with robust phonetic ability: https://github.com/changelinglab/prism.
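As a rough illustration of the transcription-based evaluation that PRiSM standardizes, the sketch below computes corpus-level phone error rate (PER) as Levenshtein edit distance over phone sequences divided by the number of reference phones. The function names, toy IPA sequences, and aggregation choice are illustrative assumptions, not PRiSM's actual recipes; see the repository for the released implementation.

```python
# Minimal sketch of transcription-based PR evaluation via phone error rate (PER).
# Function names and example transcripts are illustrative; PRiSM's released
# recipes may tokenize, align, and aggregate differently.

def edit_distance(ref: list[str], hyp: list[str]) -> int:
    """Levenshtein distance over phone sequences (substitutions, insertions, deletions)."""
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[m][n]

def phone_error_rate(references: list[list[str]], hypotheses: list[list[str]]) -> float:
    """Corpus-level PER: total edit operations divided by total reference phones."""
    total_edits = sum(edit_distance(r, h) for r, h in zip(references, hypotheses))
    total_phones = sum(len(r) for r in references)
    return total_edits / max(total_phones, 1)

if __name__ == "__main__":
    # Toy IPA phone sequences for a single hypothetical utterance.
    refs = [["p", "ɹ", "ɪ", "z", "əm"]]
    hyps = [["p", "ɹ", "i", "z", "əm"]]
    print(f"PER: {phone_error_rate(refs, hyps):.2%}")  # one substitution -> 20.00%
```

Working on phone sequences rather than words is what makes this metric language-agnostic: any language whose speech can be transcribed into a shared phone inventory (e.g. IPA) can be scored with the same procedure.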