SHANKS: Simultaneous Hearing and Thinking for Spoken Language Models

October 8, 2025
Authors: Cheng-Han Chiang, Xiaofei Wang, Linjie Li, Chung-Ching Lin, Kevin Lin, Shujie Liu, Zhendong Wang, Zhengyuan Yang, Hung-yi Lee, Lijuan Wang
cs.AI

Abstract

Current large language models (LLMs) and spoken language models (SLMs) begin thinking and taking actions only after the user has finished their turn. This prevents the model from interacting during the user's turn and can lead to high response latency while it waits to think. Consequently, thinking after receiving the full input is not suitable for speech-to-speech interaction, where real-time, low-latency exchange is important. We address this by noting that humans naturally "think while listening." In this paper, we propose SHANKS, a general inference framework that enables SLMs to generate unspoken chain-of-thought reasoning while listening to the user input. SHANKS streams the input speech in fixed-duration chunks and, as soon as a chunk is received, generates unspoken reasoning based on all previous speech and reasoning, while the user continues speaking. SHANKS uses this unspoken reasoning to decide whether to interrupt the user and to make tool calls to complete the task. We demonstrate that SHANKS enhances real-time user-SLM interaction in two scenarios: (1) when the user is presenting a step-by-step solution to a math problem, SHANKS can listen, reason, and interrupt when the user makes a mistake, achieving 37.1% higher interruption accuracy than a baseline that interrupts without thinking; and (2) in a tool-augmented dialogue, SHANKS can complete 56.9% of the tool calls before the user finishes their turn. Overall, SHANKS moves toward models that keep thinking throughout the conversation, not only after a turn ends. Animated illustrations of SHANKS can be found at https://d223302.github.io/SHANKS/.
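
To make the chunked listen-and-think loop in the abstract concrete, here is a minimal Python sketch. It streams fixed-duration audio chunks, generates hidden reasoning after each chunk, and uses that reasoning to decide whether to interrupt or issue a tool call before the user's turn ends. The `slm` interface (`reason`, `decide`, `speak`, `respond`), `iter_chunks`, the `Action` fields, and the 2-second chunk length are illustrative assumptions, not the authors' implementation or API.

```python
# Minimal sketch of a think-while-listening loop. The SLM interface and
# chunk length below are hypothetical; only the overall control flow
# follows the description in the abstract.
from dataclasses import dataclass, field

CHUNK_SECONDS = 2.0  # fixed-duration chunking (illustrative value)


@dataclass
class DialogueState:
    speech_chunks: list = field(default_factory=list)      # audio heard so far
    unspoken_thoughts: list = field(default_factory=list)  # hidden reasoning


def listen_and_think(slm, audio_stream, state: DialogueState):
    """Interleave listening with unspoken reasoning while the user speaks."""
    for chunk in audio_stream.iter_chunks(seconds=CHUNK_SECONDS):
        state.speech_chunks.append(chunk)

        # Generate hidden chain-of-thought conditioned on all previous
        # speech and reasoning; this is never shown to the user.
        thought = slm.reason(state.speech_chunks, state.unspoken_thoughts)
        state.unspoken_thoughts.append(thought)

        # Use the hidden reasoning to decide whether to act mid-turn.
        action = slm.decide(thought)
        if action.kind == "interrupt":
            slm.speak(action.message)            # e.g., point out a mistake
            return
        if action.kind == "tool_call":
            result = action.tool(**action.args)  # tool call before turn ends
            state.unspoken_thoughts.append(f"tool result: {result}")

    # User finished the turn; respond using the reasoning already produced.
    slm.speak(slm.respond(state.speech_chunks, state.unspoken_thoughts))
```

The key design point the sketch tries to capture is that reasoning happens incrementally per chunk rather than once after the full utterance, so interruptions and tool calls can be issued while the user is still speaking.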