

WHEN TO ACT, WHEN TO WAIT: Modeling Structural Trajectories for Intent Triggerability in Task-Oriented Dialogue

June 2, 2025
Authors: Yaoyao Qian, Jindan Huang, Yuanli Wang, Simon Yu, Kyrie Zhixuan Zhou, Jiayuan Mao, Mingfu Liang, Hanhan Zhou
cs.AI

Abstract

Task-oriented dialogue systems often struggle when user utterances appear semantically complete yet lack the structural information needed to trigger appropriate system action. This arises because users frequently do not fully understand their own needs, while systems require precise intent definitions. Current LLM-based agents cannot effectively distinguish between linguistically complete and contextually triggerable expressions, and they lack frameworks for collaborative intent formation. We present STORM, a framework that models asymmetric information dynamics through conversations between a UserLLM (with full access to its internal state) and an AgentLLM (which observes only external behavior). STORM produces annotated corpora that capture expression trajectories and latent cognitive transitions, enabling systematic analysis of how collaborative understanding develops. Our contributions include: (1) formalizing asymmetric information processing in dialogue systems; (2) modeling intent formation by tracking the evolution of collaborative understanding; and (3) evaluation metrics that measure internal cognitive improvement alongside task performance. Experiments across four language models reveal that moderate uncertainty (40-60%) can outperform complete transparency in certain scenarios, with model-specific patterns suggesting that the optimal degree of information completeness in human-AI collaboration should be reconsidered. These findings deepen our understanding of asymmetric reasoning dynamics and inform the design of uncertainty-calibrated dialogue systems.
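To make the asymmetric setup concrete, below is a minimal Python sketch of the kind of dialogue loop the abstract describes: a user simulator that holds full internal goal state, an agent that sees only surface utterances, and a rollout that yields an annotated trajectory. All class, field, and function names here (Turn, UserLLM, AgentLLM, simulate, uncertainty) are illustrative assumptions for exposition, not the paper's released API.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    utterance: str
    latent_state: dict | None = None  # hidden annotations; recorded in the corpus,
                                      # never shown to the agent

@dataclass
class UserLLM:
    """Simulated user with full access to its own internal goal state."""
    goal: dict                 # e.g. {"task": "book_flight", "constraints": {...}}
    uncertainty: float = 0.5   # fraction of the goal left unexpressed (0 = fully transparent)

    def speak(self, history: list[Turn]) -> Turn:
        # A real implementation would prompt an LLM to verbalize a partial view
        # of `goal`; here we only mark how much internal state the turn exposes.
        utterance = f"(request, transparency={1 - self.uncertainty:.0%})"
        return Turn("user", utterance,
                    latent_state={"goal": self.goal, "uncertainty": self.uncertainty})

@dataclass
class AgentLLM:
    """Agent that observes surface utterances only (the information asymmetry)."""

    def respond(self, history: list[Turn]) -> Turn:
        observable = [t.utterance for t in history]  # no access to latent_state
        # Core decision the paper studies: is the request contextually
        # *triggerable* (act now) or merely fluent but underspecified (wait/ask)?
        triggerable = "transparency=100%" in observable[-1]
        return Turn("agent", "<execute_action>" if triggerable else "<ask_clarification>")

def simulate(user: UserLLM, agent: AgentLLM, max_turns: int = 6) -> list[Turn]:
    """Roll out one dialogue; the returned trajectory is one annotated corpus unit."""
    history: list[Turn] = []
    for _ in range(max_turns):
        history.append(user.speak(history))
        history.append(agent.respond(history))
    return history

corpus_unit = simulate(UserLLM(goal={"task": "book_flight"}, uncertainty=0.5), AgentLLM())
```

Because each Turn carries both the observable utterance and the hidden latent_state, a trajectory like corpus_unit supports exactly the two analyses the abstract names: the agent is evaluated on observable behavior alone, while the annotations let one trace latent cognitive transitions and vary the uncertainty level (e.g., the 40-60% regime) across rollouts.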