Interface Design for Self-Supervised Speech Models
June 18, 2024
Authors: Yi-Jen Shih, David Harwath
cs.AI
Abstract
Self-supervised speech (SSL) models have recently become widely adopted for
many downstream speech processing tasks. The general usage pattern is to employ
SSL models as feature extractors, and then train a downstream prediction head
to solve a specific task. However, different layers of SSL models have been
shown to capture different types of information, and the methods of combining
them are not well studied. To this end, we extend the general framework for SSL
model utilization by proposing an interface that connects the upstream and
downstream models. Under this view, the dominant technique of combining features via a
layerwise weighted sum can be regarded as a specific interface. We propose
several alternative interface designs and demonstrate that the weighted sum
interface is suboptimal for many tasks. In particular, we show that a
convolutional interface whose depth scales logarithmically with the depth of
the upstream model consistently outperforms many other interface designs.
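The two interface families named in the abstract can be illustrated in code. Below is a minimal NumPy sketch, not the paper's exact architecture: the layer count `L`, feature shapes, and the pairwise-merge realization of "depth that scales logarithmically with the upstream depth" are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: an upstream SSL model with L = 12 layers, each emitting
# features of shape (T frames, D dims). Shapes are illustrative.
L, T, D = 12, 50, 32
layer_feats = [rng.standard_normal((T, D)) for _ in range(L)]

# Weighted-sum interface: one learnable scalar per layer, softmax-normalized,
# producing a single combined feature sequence.
def weighted_sum_interface(feats, weights):
    w = np.exp(weights - weights.max())
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, feats))

combined = weighted_sum_interface(layer_feats, np.zeros(L))
assert combined.shape == (T, D)

# Log-depth convolutional interface (one plausible reading): treat the layer
# axis as a sequence and repeatedly merge adjacent layer pairs with a learned
# map, so the number of stages is ceil(log2(L)) rather than L.
def log_depth_conv_interface(feats, merge_weights):
    stack = list(feats)
    for W in merge_weights:                 # one (2D -> D) map per stage
        merged = []
        for i in range(0, len(stack) - 1, 2):
            pair = np.concatenate([stack[i], stack[i + 1]], axis=-1)  # (T, 2D)
            merged.append(np.maximum(pair @ W, 0.0))                  # ReLU
        if len(stack) % 2 == 1:             # carry an odd layer forward
            merged.append(stack[-1])
        stack = merged
    return stack[0]

n_stages = int(np.ceil(np.log2(L)))         # 4 merge stages for L = 12
stage_Ws = [rng.standard_normal((2 * D, D)) * 0.1 for _ in range(n_stages)]
out = log_depth_conv_interface(layer_feats, stage_Ws)
assert out.shape == (T, D)
```

Both interfaces map the full stack of upstream layer outputs to a single feature sequence for the downstream head; the key contrast is that the weighted sum has one scalar parameter per layer, while the hierarchical merge learns nonlinear combinations with only logarithmically many stages.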