
Sentence-wise Speech Summarization: Task, Datasets, and End-to-End Modeling with LM Knowledge Distillation

August 1, 2024
作者: Kohei Matsuura, Takanori Ashihara, Takafumi Moriya, Masato Mimura, Takatomo Kano, Atsunori Ogawa, Marc Delcroix
cs.AI

Abstract

This paper introduces a novel approach called sentence-wise speech summarization (Sen-SSum), which generates text summaries from a spoken document in a sentence-by-sentence manner. Sen-SSum combines the real-time processing of automatic speech recognition (ASR) with the conciseness of speech summarization. To explore this approach, we present two datasets for Sen-SSum: Mega-SSum and CSJ-SSum. Using these datasets, our study evaluates two types of Transformer-based models: 1) cascade models that combine ASR and strong text summarization models, and 2) end-to-end (E2E) models that directly convert speech into a text summary. While E2E models are appealing for developing compute-efficient systems, they perform worse than cascade models. Therefore, we propose knowledge distillation for E2E models using pseudo-summaries generated by the cascade models. Our experiments show that this proposed knowledge distillation effectively improves the performance of the E2E model on both datasets.
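The distillation scheme in the abstract can be sketched in a few lines: the cascade model (ASR followed by a text summarizer) acts as a teacher, producing a pseudo-summary for each spoken sentence, and those (speech, pseudo-summary) pairs become training targets for the E2E student. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation; `asr` and `summarize` are hypothetical stand-ins for real models.

```python
# Minimal sketch of pseudo-summary knowledge distillation for Sen-SSum.
# Assumption: asr() and summarize() are hypothetical stand-ins; a real
# pipeline would use trained ASR and text-summarization models.

def asr(speech_sentence: str) -> str:
    # Stand-in ASR: here "speech" is already text, so pass it through.
    return speech_sentence

def summarize(transcript: str) -> str:
    # Stand-in summarizer: keep the first half of the words.
    words = transcript.split()
    return " ".join(words[: max(1, len(words) // 2)])

def cascade_teacher(speech_sentence: str) -> str:
    """Cascade model: ASR transcription followed by text summarization."""
    return summarize(asr(speech_sentence))

def build_distillation_pairs(unlabeled_speech):
    """Pair each spoken sentence with a teacher pseudo-summary.
    These pairs serve as supervision for training the E2E student."""
    return [(s, cascade_teacher(s)) for s in unlabeled_speech]

pairs = build_distillation_pairs([
    "the quick brown fox jumps over the lazy dog",
])
print(pairs[0][1])  # teacher pseudo-summary for the first sentence
```

In the paper's setting the student is a Transformer that maps speech directly to the pseudo-summary text; the key idea shown here is only the data-generation step, which lets the cascade model label otherwise unannotated speech.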

