"What Are You Doing?": Effects of Intermediate Feedback from Agentic LLM In-Car Assistants During Multi-Step Processing
February 17, 2026
Authors: Johannes Kirmayr, Raphael Wennmacher, Khanh Huynh, Lukas Stappen, Elisabeth André, Florian Alt
cs.AI
Abstract
Agentic AI assistants that autonomously perform multi-step tasks raise open questions for user experience: how should such systems communicate progress and reasoning during extended operations, especially in attention-critical contexts such as driving? We investigate feedback timing and verbosity in agentic LLM-based in-car assistants through a controlled, mixed-methods study (N=45) comparing feedback on planned steps and intermediate results against silent operation with a final-only response. Using a dual-task paradigm with an in-car voice assistant, we found that intermediate feedback significantly improved perceived speed, trust, and user experience while reducing task load, and these effects held across varying task complexities and interaction contexts. Interviews further revealed a user preference for an adaptive approach: high initial transparency to establish trust, progressively reduced verbosity as the system proves reliable, and adjustments based on task stakes and situational context. We translate these empirical findings into design implications for feedback timing and verbosity in agentic assistants, balancing transparency and efficiency.