
Toward Interactive Dictation

July 8, 2023
Authors: Belinda Z. Li, Jason Eisner, Adam Pauls, Sam Thomson
cs.AI

Abstract

Voice dictation is an increasingly important text input modality. Existing systems that allow both dictation and editing-by-voice restrict their command language to flat templates invoked by trigger words. In this work, we study the feasibility of allowing users to interrupt their dictation with spoken editing commands in open-ended natural language. We introduce a new task and dataset, TERTiUS, to experiment with such systems. To support this flexibility in real-time, a system must incrementally segment and classify spans of speech as either dictation or command, and interpret the spans that are commands. We experiment with using large pre-trained language models to predict the edited text, or alternatively, to predict a small text-editing program. Experiments show a natural trade-off between model accuracy and latency: a smaller model achieves 30% end-state accuracy with 1.3 seconds of latency, while a larger model achieves 55% end-state accuracy with 7 seconds of latency.
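The pipeline the abstract describes, segmenting speech into spans, classifying each span as dictation or command, and interpreting command spans as small text-editing programs, can be illustrated with a minimal sketch. This is not the paper's system: the `Span` labels, the `interpret` stub, and the single toy "replace X with Y" program are all hypothetical stand-ins for the trained segmenter/classifier and the language-model interpreter.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Span:
    text: str
    is_command: bool  # in the real system this label is predicted, not given

def interpret(command: str) -> Callable[[str], str]:
    """Stub interpreter: turns one toy command, "replace X with Y",
    into a small text-editing program over the dictation buffer."""
    words = command.split()
    if words and words[0] == "replace" and "with" in words:
        i = words.index("with")
        old, new = " ".join(words[1:i]), " ".join(words[i + 1:])
        return lambda buf: buf.replace(old, new)
    return lambda buf: buf  # unrecognized command: leave buffer unchanged

def run(spans: List[Span]) -> str:
    """Process spans incrementally: append dictation, apply commands."""
    buffer = ""
    for span in spans:
        if span.is_command:
            buffer = interpret(span.text)(buffer)
        else:
            buffer = (buffer + " " + span.text).strip()
    return buffer

result = run([
    Span("send the draft to Alice", False),
    Span("replace Alice with Bob", True),
])
print(result)  # -> "send the draft to Bob"
```

The program-prediction route sketched here (emit an edit program, then apply it) trades latency for interpretability relative to directly predicting the edited text, mirroring the accuracy/latency trade-off the experiments report.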