BUT System for the MLC-SLM Challenge
June 16, 2025
Authors: Alexander Polok, Jiangyu Han, Dominik Klement, Samuele Cornell, Jan Černocký, Lukáš Burget
cs.AI
Abstract
We present a two-speaker automatic speech recognition (ASR) system that
combines DiCoW -- a diarization-conditioned variant of Whisper -- with
DiariZen, a diarization pipeline built on top of Pyannote. We first evaluate
both systems in out-of-domain (OOD) multilingual scenarios without any
fine-tuning. In these scenarios, DiariZen consistently outperforms the baseline
Pyannote diarization model, demonstrating strong generalization. Despite being
fine-tuned on English-only data for target-speaker ASR, DiCoW retains solid
multilingual performance, indicating that encoder modifications preserve
Whisper's multilingual capabilities. We then fine-tune both DiCoW and DiariZen
on the MLC-SLM challenge data. The fine-tuned DiariZen continues to outperform
the fine-tuned Pyannote baseline, while DiCoW sees further gains from domain
adaptation. Our final system achieves a micro-average tcpWER/CER of 16.75% and
ranks second in Task 2 of the MLC-SLM challenge. Lastly, we identify several
labeling inconsistencies in the training data -- such as missing speech
segments and incorrect silence annotations -- which can hinder diarization
fine-tuning. We propose simple mitigation strategies to address these issues
and improve system robustness.
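As a rough illustration of the two-stage design described above (diarization followed by speaker-attributed ASR), the sketch below wires an off-the-shelf Pyannote diarization pipeline to Whisper. This is not the authors' DiCoW/DiariZen code: DiCoW conditions the Whisper encoder on diarization outputs rather than hard-cropping segments, and the checkpoint names, 16 kHz sample-rate assumption, and segment-cropping strategy here are illustrative stand-ins.

```python
# Minimal sketch of a diarization-then-ASR pipeline (assumed stand-ins for
# DiariZen and DiCoW): Pyannote produces speaker turns, Whisper transcribes
# each turn. Gated Pyannote checkpoints may require a Hugging Face token.
import whisper
from pyannote.audio import Pipeline

SAMPLE_RATE = 16000  # Whisper expects 16 kHz audio

# Stage 1: speaker diarization (the paper uses DiariZen, built on Pyannote)
diarizer = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1")
diarization = diarizer("meeting.wav")

# Stage 2: ASR on each speaker turn (the paper instead conditions Whisper
# on the diarization output, rather than transcribing cropped segments)
asr = whisper.load_model("large-v3")
audio = whisper.load_audio("meeting.wav")  # float32 waveform at 16 kHz

transcripts = []
for turn, _, speaker in diarization.itertracks(yield_label=True):
    segment = audio[int(turn.start * SAMPLE_RATE):int(turn.end * SAMPLE_RATE)]
    result = asr.transcribe(segment)
    transcripts.append((turn.start, turn.end, speaker, result["text"].strip()))

# Speaker-attributed transcript in temporal order
for start, end, speaker, text in sorted(transcripts):
    print(f"[{start:7.2f}-{end:7.2f}] {speaker}: {text}")
```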