"What Are You Doing?": Effects of Intermediate Feedback from Agentic LLM In-Car Assistants During Multi-Step Processing
February 17, 2026
Authors: Johannes Kirmayr, Raphael Wennmacher, Khanh Huynh, Lukas Stappen, Elisabeth André, Florian Alt
cs.AI
Abstract
Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity in agentic LLM-based in-car assistants through a controlled, mixed-methods study (N=45), comparing feedback on planned steps and intermediate results against silent operation with a final-only response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load, and that these effects held across varying task complexities and interaction contexts. Interviews further revealed user preferences for an adaptive approach: high initial transparency to establish trust, followed by progressively reduced verbosity as the system proves reliable, with adjustments based on task stakes and situational context. We translate our empirical findings into design implications for feedback timing and verbosity in agentic assistants, balancing transparency and efficiency.