
Toward Interactive Dictation

July 8, 2023
Authors: Belinda Z. Li, Jason Eisner, Adam Pauls, Sam Thomson
cs.AI

Abstract

Voice dictation is an increasingly important text input modality. Existing systems that allow both dictation and editing-by-voice restrict their command language to flat templates invoked by trigger words. In this work, we study the feasibility of allowing users to interrupt their dictation with spoken editing commands in open-ended natural language. We introduce a new task and dataset, TERTiUS, to experiment with such systems. To support this flexibility in real-time, a system must incrementally segment and classify spans of speech as either dictation or command, and interpret the spans that are commands. We experiment with using large pre-trained language models to predict the edited text, or alternatively, to predict a small text-editing program. Experiments show a natural trade-off between model accuracy and latency: a smaller model achieves 30% end-state accuracy with 1.3 seconds of latency, while a larger model achieves 55% end-state accuracy with 7 seconds of latency.
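The abstract contrasts two output formats for interpreting a spoken command: having the model predict the full edited text directly, or having it predict a small text-editing program that is then executed on the document. The following sketch (not from the paper; the `Replace` program type and `apply_edit` helper are illustrative assumptions) shows how the two formats differ at execution time:

```python
# Illustrative sketch of the two output formats contrasted in the abstract:
# (1) the model predicts the edited text directly, or
# (2) the model predicts a small text-editing program, which we execute.
# `Replace` is a hypothetical single-operation program for illustration.

from dataclasses import dataclass


@dataclass
class Replace:
    """A tiny 'text-editing program': swap the first `old` for `new`."""
    old: str
    new: str

    def apply(self, text: str) -> str:
        return text.replace(self.old, self.new, 1)


def apply_edit(text: str, edit) -> str:
    if isinstance(edit, str):
        # Format (1): the model emitted the full edited document.
        return edit
    # Format (2): the model emitted a program; execute it on the document.
    return edit.apply(text)


doc = "send the draft to Alex tomorrow"
# Spoken command: "change Alex to Sam"
print(apply_edit(doc, Replace("Alex", "Sam")))   # program prediction
print(apply_edit(doc, "send the draft to Sam tomorrow"))  # direct prediction
```

A program-valued output is typically much shorter than the edited document, which is one reason the trade-off between output format, accuracy, and latency matters for real-time use.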