
USAD: Universal Speech and Audio Representation via Distillation

June 23, 2025
Authors: Heng-Jui Chang, Saurabhchand Bhati, James Glass, Alexander H. Liu
cs.AI

Abstract

Self-supervised learning (SSL) has revolutionized audio representations, yet models often remain domain-specific, focusing on either speech or non-speech tasks. In this work, we present Universal Speech and Audio Distillation (USAD), a unified approach to audio representation learning that integrates diverse audio types (speech, sound, and music) into a single model. USAD employs efficient layer-to-layer distillation from domain-specific SSL models to train a student on a comprehensive audio dataset. USAD offers competitive performance across various benchmarks and datasets, including frame- and instance-level speech processing tasks, audio tagging, and sound classification, achieving near state-of-the-art results with a single encoder on the SUPERB and HEAR benchmarks.
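To make the layer-to-layer distillation idea concrete, below is a minimal PyTorch sketch of how a student's intermediate hidden states could be regressed onto a teacher's, one projection head per distilled layer pair. The loss form (L1 plus cosine similarity), the layer mapping, and the dimensions are illustrative assumptions, not the paper's exact training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LayerToLayerDistillLoss(nn.Module):
    """Sketch of layer-to-layer distillation: match student hidden states
    to teacher hidden states, layer by layer, through linear prediction heads.
    The specific objective here is an assumption for illustration."""

    def __init__(self, student_dim: int, teacher_dim: int, num_layers: int):
        super().__init__()
        # One linear prediction head per distilled (student layer, teacher layer) pair.
        self.heads = nn.ModuleList(
            nn.Linear(student_dim, teacher_dim) for _ in range(num_layers)
        )

    def forward(self, student_hiddens, teacher_hiddens):
        # student_hiddens / teacher_hiddens: lists of (batch, time, dim) tensors,
        # one per distilled layer, assumed already aligned in time.
        loss = torch.tensor(0.0, device=student_hiddens[0].device)
        for head, s, t in zip(self.heads, student_hiddens, teacher_hiddens):
            pred = head(s)
            # L1 distance plus a negative cosine-similarity term,
            # a common combination in SSL distillation objectives.
            loss = loss + F.l1_loss(pred, t) - F.cosine_similarity(pred, t, dim=-1).mean()
        return loss / len(self.heads)
```

In a USAD-style setup, one such loss would be computed against each domain-specific teacher (e.g., a speech SSL model and a general-audio SSL model) and the terms summed, so a single student encoder learns from all teachers over a mixed speech, sound, and music dataset.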
