Improving Joint Speech-Text Representations Without Alignment
August 11, 2023
Authors: Cal Peyser, Zhong Meng, Ke Hu, Rohit Prabhavalkar, Andrew Rosenberg, Tara N. Sainath, Michael Picheny, Kyunghyun Cho
cs.AI
Abstract
The last year has seen astonishing progress in text-prompted image generation premised on the idea of a cross-modal representation space in which the text and image domains are represented jointly. In ASR, this idea has found application as joint speech-text encoders that can scale to the capacities of very large parameter models by being trained on both unpaired speech and text. While these methods show promise, they have required special treatment of the sequence-length mismatch inherent in speech and text, either by up-sampling heuristics or an explicit alignment model. In this work, we offer evidence that joint speech-text encoders naturally achieve consistent representations across modalities by disregarding sequence length, and argue that consistency losses could forgive length differences and simply assume the best alignment. We show that such a loss improves downstream WER in both a large-parameter monolingual and multilingual system.
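To make the "assume the best alignment" idea concrete, below is a minimal PyTorch sketch of a length-agnostic consistency loss: each text embedding is penalized only by its distance to the nearest speech frame, so neither up-sampling heuristics nor an explicit alignment model is required. This is an illustration under stated assumptions, not the paper's exact loss; the function name, tensor shapes, and distance choice (squared Euclidean) are ours.

```python
# A minimal sketch of a best-alignment consistency loss, assuming the joint
# encoder emits frame-level embeddings for both modalities. Names such as
# best_alignment_consistency_loss, speech_emb, and text_emb are illustrative
# and do not come from the paper.
import torch


def best_alignment_consistency_loss(speech_emb: torch.Tensor,
                                    text_emb: torch.Tensor) -> torch.Tensor:
    """Consistency loss that forgives the speech/text length mismatch.

    Args:
        speech_emb: (T_speech, D) speech encoder outputs.
        text_emb:   (T_text, D) text encoder outputs; typically T_text < T_speech.
    Returns:
        Scalar loss; zero when every text embedding coincides with some
        speech frame.
    """
    # Pairwise squared Euclidean distances, shape (T_text, T_speech).
    dists = torch.cdist(text_emb, speech_emb, p=2).pow(2)
    # "Assume the best alignment": each text position keeps only its nearest
    # speech frame, so the length difference itself is never penalized.
    best = dists.min(dim=1).values
    return best.mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    speech = torch.randn(120, 256)  # e.g. 120 acoustic frames
    text = torch.randn(18, 256)     # e.g. 18 subword embeddings
    print(best_alignment_consistency_loss(speech, text))
```

In a training setup, a term like this would be added to the usual ASR objective on paired data, pulling the two modalities' representations together without ever computing or supervising an alignment.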