
AudioSAE: Towards Understanding of Audio-Processing Models with Sparse AutoEncoders

February 4, 2026
Authors: Georgii Aparin, Tasnima Sadekova, Alexey Rukhovich, Assel Yermekova, Laida Kushnareva, Vadim Popov, Kristian Kuznetsov, Irina Piontkovskaya
cs.AI

Abstract

Sparse Autoencoders (SAEs) are powerful tools for interpreting neural representations, yet their use in audio remains underexplored. We train SAEs across all encoder layers of Whisper and HuBERT, provide an extensive evaluation of their stability and interpretability, and demonstrate their practical utility. Over 50% of the features remain consistent across random seeds, and reconstruction quality is preserved. SAE features capture general acoustic and semantic information as well as specific events, including environmental noises and paralinguistic sounds (e.g., laughter, whispering), and disentangle them effectively: erasing a concept requires removing only 19-27% of features. Feature steering reduces Whisper's false speech detections by 70% with a negligible WER increase, demonstrating real-world applicability. Finally, we find that SAE features correlate with human EEG activity during speech perception, indicating alignment with human neural processing. The code and checkpoints are available at https://github.com/audiosae/audiosae_demo.
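To make the mechanics concrete, the sketch below shows the generic SAE recipe the abstract refers to: a ReLU encoder maps a model's hidden activations to a wider sparse feature space, a linear decoder reconstructs them, training balances reconstruction error against an L1 sparsity penalty, and "erasing a concept" amounts to zeroing a subset of latent features before decoding. All dimensions, weights, and function names here are hypothetical illustrations, not the paper's actual architecture or released code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's SAE widths may differ.
d_model, d_sae = 512, 4096

# Randomly initialised SAE parameters (a trained SAE would learn these).
W_enc = rng.normal(0.0, 0.02, (d_sae, d_model))
b_enc = np.zeros(d_sae)
W_dec = rng.normal(0.0, 0.02, (d_model, d_sae))
b_dec = np.zeros(d_model)

def encode(x):
    """ReLU encoder: maps activations to sparse, non-negative features."""
    return np.maximum(0.0, x @ W_enc.T + b_enc)

def decode(h):
    """Linear decoder: reconstructs activations from SAE features."""
    return h @ W_dec.T + b_dec

def sae_loss(x, l1_coef=1e-3):
    """Reconstruction MSE plus an L1 penalty that encourages sparsity."""
    h = encode(x)
    x_hat = decode(h)
    recon = ((x - x_hat) ** 2).mean()
    sparsity = np.abs(h).mean()
    return recon + l1_coef * sparsity

def ablate(x, feature_ids):
    """Concept erasure by ablation: zero chosen features, then decode."""
    h = encode(x)
    h[..., feature_ids] = 0.0
    return decode(h)

# Stand-in for a batch of encoder-layer activations.
x = rng.normal(0.0, 1.0, (8, d_model))
h = encode(x)
print(h.shape)  # (8, 4096)
```

Feature steering, as used to suppress Whisper's false speech detections, works the same way as `ablate` except that instead of zeroing features, selected latent units are scaled or shifted before decoding.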