SE-DiCoW: Self-Enrolled Diarization-Conditioned Whisper
January 27, 2026
Authors: Alexander Polok, Dominik Klement, Samuele Cornell, Matthew Wiesner, Jan Černocký, Sanjeev Khudanpur, Lukáš Burget
cs.AI
Abstract
Speaker-attributed automatic speech recognition (ASR) in multi-speaker environments remains a major challenge. While some approaches achieve strong performance when fine-tuned on specific domains, few systems generalize well across out-of-domain datasets. Our prior work, Diarization-Conditioned Whisper (DiCoW), leverages speaker diarization outputs as conditioning information and, with minimal fine-tuning, demonstrates strong multilingual and multi-domain performance. In this paper, we address a key limitation of DiCoW: ambiguity in Silence-Target-Non-target-Overlap (STNO) masks, where two or more fully overlapping speakers may receive nearly identical conditioning despite differing transcriptions. We introduce SE-DiCoW (Self-Enrolled Diarization-Conditioned Whisper), which uses diarization output to locate an enrollment segment anywhere in the conversation where the target speaker is most active. This enrollment segment serves as fixed conditioning, injected via cross-attention at each encoder layer. We further refine DiCoW with improved data segmentation, model initialization, and augmentation. Together, these advances yield substantial gains: SE-DiCoW reduces macro-averaged tcpWER by 52.4% relative to the original DiCoW on the EMMA MT-ASR benchmark.
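The enrollment-selection step described above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes diarization output is available as `(start, end, speaker)` tuples, uses a hypothetical `select_enrollment` helper with a fixed-length sliding window, and scores each window by target-speaker speech time minus competing-speaker speech time as a rough proxy for "where the target speaker is most active".

```python
def select_enrollment(segments, target, win=10.0, step=1.0):
    """Pick an enrollment window for `target` from diarization output.

    segments : list of (start, end, speaker) tuples from a diarizer
    target   : speaker label to enroll
    Returns the (start, end) window maximizing target speech minus
    overlapping non-target speech (a simple activity proxy).
    """
    def ov(a0, a1, b0, b1):
        # Length of the intersection of intervals [a0, a1] and [b0, b1].
        return max(0.0, min(a1, b1) - max(a0, b0))

    total = max(end for _, end, _ in segments)
    best, best_score = (0.0, win), float("-inf")
    t = 0.0
    # Slide a fixed-length window over the recording.
    while t == 0.0 or t + win <= total:
        tgt = sum(ov(t, t + win, s, e)
                  for s, e, spk in segments if spk == target)
        oth = sum(ov(t, t + win, s, e)
                  for s, e, spk in segments if spk != target)
        if tgt - oth > best_score:
            best, best_score = (t, t + win), tgt - oth
        t += step
    return best
```

With segments `[(0, 5, 'A'), (3, 8, 'B'), (10, 20, 'A')]` and target `'A'`, the selected window lands in the 10–20 s region, where A speaks alone rather than overlapped with B, which is exactly the kind of unambiguous audio an enrollment segment should provide.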