

POWSM: A Phonetic Open Whisper-Style Speech Foundation Model

October 28, 2025
作者: Chin-Jou Li, Kalvin Chang, Shikhar Bharadwaj, Eunjung Yeo, Kwanghee Choi, Jian Zhu, David Mortensen, Shinji Watanabe
cs.AI

Abstract

Recent advances in spoken language processing have led to substantial progress in phonetic tasks such as automatic speech recognition (ASR), phone recognition (PR), grapheme-to-phoneme conversion (G2P), and phoneme-to-grapheme conversion (P2G). Despite their conceptual similarity, these tasks have largely been studied in isolation, each relying on task-specific architectures and datasets. In this paper, we introduce POWSM (Phonetic Open Whisper-style Speech Model), the first unified framework capable of jointly performing multiple phone-related tasks. POWSM enables seamless conversion between audio, text (graphemes), and phones, opening up new possibilities for universal and low-resource speech processing. Our model outperforms or matches specialized PR models of similar size (Wav2Vec2Phoneme and ZIPA) while jointly supporting G2P, P2G, and ASR. Our training data, code and models are released to foster open science.
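The abstract describes a single Whisper-style model that routes between four tasks (ASR, PR, G2P, P2G) over audio, graphemes, and phones. A minimal sketch of how such a unified interface could look, assuming a Whisper-like design where a special task token prefixes the decoder input; the token names and the `build_prompt` helper are hypothetical, not POWSM's actual API:

```python
# Hypothetical task tokens for a unified phonetic model, in the style of
# Whisper's special tokens. POWSM's real token inventory is not given
# in the abstract.
TASK_TOKENS = {
    "asr": "<|asr|>",  # audio    -> graphemes
    "pr":  "<|pr|>",   # audio    -> phones
    "g2p": "<|g2p|>",  # graphemes -> phones
    "p2g": "<|p2g|>",  # phones    -> graphemes
}

def build_prompt(task: str, source: str) -> str:
    """Prefix the input with a task token so one decoder can serve
    all four tasks instead of four task-specific models."""
    return f"{TASK_TOKENS[task]} {source}"

print(build_prompt("g2p", "cat"))  # -> "<|g2p|> cat"
```

The point of this framing is that switching tasks only changes the conditioning token, which is what lets one set of weights replace several specialized systems.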