

SE-DiCoW: Self-Enrolled Diarization-Conditioned Whisper

January 27, 2026
Authors: Alexander Polok, Dominik Klement, Samuele Cornell, Matthew Wiesner, Jan Černocký, Sanjeev Khudanpur, Lukáš Burget
cs.AI

Abstract

Speaker-attributed automatic speech recognition (ASR) in multi-speaker environments remains a major challenge. While some approaches achieve strong performance when fine-tuned on specific domains, few systems generalize well across out-of-domain datasets. Our prior work, Diarization-Conditioned Whisper (DiCoW), leverages speaker diarization outputs as conditioning information and, with minimal fine-tuning, demonstrates strong multilingual and multi-domain performance. In this paper, we address a key limitation of DiCoW: ambiguity in Silence-Target-Non-target-Overlap (STNO) masks, where two or more fully overlapping speakers may have nearly identical conditioning despite differing transcriptions. We introduce SE-DiCoW (Self-Enrolled Diarization-Conditioned Whisper), which uses diarization output to locate an enrollment segment anywhere in the conversation where the target speaker is most active. This enrollment segment is used as fixed conditioning via cross-attention at each encoder layer. We further refine DiCoW with improved data segmentation, model initialization, and augmentation. Together, these advances yield substantial gains: SE-DiCoW reduces macro-averaged tcpWER by 52.4% relative to the original DiCoW on the EMMA MT-ASR benchmark.
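The abstract describes selecting an enrollment segment as the window where the target speaker is most active. A minimal sketch of one plausible selection heuristic is shown below, assuming a frame-level binary diarization matrix; the exact scoring used by the authors (here: target activity with overlapping frames penalized) and the function names are assumptions, not the paper's implementation.

```python
import numpy as np

def select_enrollment_window(activity, target_idx, win_len):
    """Return the start frame of the window where the target speaker is
    most active while other speakers are least active (illustrative
    heuristic, not the authors' exact criterion).

    activity: (num_speakers, num_frames) binary diarization matrix
    target_idx: row index of the target speaker
    win_len: enrollment window length in frames
    """
    target = activity[target_idx]
    # Union of all non-target speakers' activity per frame
    others = np.delete(activity, target_idx, axis=0).max(axis=0)
    # Reward target-only speech; frames where the target overlaps score 0
    frame_score = target * (1 - others)
    # Sliding-window sums via cumulative sums
    cs = np.concatenate([[0.0], np.cumsum(frame_score)])
    window_scores = cs[win_len:] - cs[:-win_len]
    return int(np.argmax(window_scores))

# Toy example: speaker 0 is the target; frames 4-5 are target-only speech
activity = np.array([
    [0, 0, 1, 1, 1, 1, 0, 0, 1, 1],  # target speaker
    [1, 1, 1, 1, 0, 0, 0, 0, 1, 1],  # other speaker
])
print(select_enrollment_window(activity, 0, 3))  # → 3 (window covering frames 3-5)
```

With real diarization output one would additionally convert the chosen frame window back to sample indices before cropping the enrollment audio.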
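The enrollment segment is described as fixed conditioning injected via cross-attention at each encoder layer. The sketch below illustrates the general mechanism with a single-head, numpy-only cross-attention step; the residual injection and the absence of learned projections are simplifications for illustration, not the model's actual architecture.

```python
import numpy as np

def cross_attend(hidden, enroll):
    """Encoder frames (queries) attend over a fixed enrollment segment
    (keys/values). Illustrative single-head attention without learned
    projection matrices.

    hidden: (T, d) encoder hidden states at one layer
    enroll: (S, d) features of the enrollment segment
    """
    d = hidden.shape[-1]
    scores = hidden @ enroll.T / np.sqrt(d)        # (T, S) similarity
    scores -= scores.max(axis=-1, keepdims=True)   # softmax stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)       # rows sum to 1
    context = attn @ enroll                        # (T, d) pooled enrollment info
    return hidden + context                        # residual injection (assumption)
```

In the described model this conditioning is applied at every encoder layer, so each layer's representation can be steered toward the target speaker identified by the enrollment audio.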