Thinking Makes LLM Agents Introverted: How Mandatory Thinking Can Backfire in User-Engaged Agents
February 8, 2026
Authors: Jiatong Li, Changdae Oh, Hyeong Kyu Choi, Jindong Wang, Sharon Li
cs.AI
Abstract
Eliciting reasoning has emerged as a powerful technique for improving the performance of large language models (LLMs) on complex tasks by inducing thinking. However, its effectiveness in realistic user-engaged agent scenarios remains unclear. In this paper, we conduct a comprehensive study of the effect of explicit thinking in user-engaged LLM agents. Our experiments span seven models, three benchmarks, and two thinking instantiations, which we evaluate through both a quantitative response taxonomy analysis and qualitative failure-propagation case studies. Contrary to expectations, we find that mandatory thinking often backfires on agents in user-engaged settings, causing anomalous performance degradation across various LLMs. Our key finding is that thinking makes agents more "introverted": it shortens responses and reduces information disclosure to users, which weakens agent-user information exchange and leads to downstream task failures. Furthermore, we demonstrate that explicitly prompting for information disclosure reliably improves performance across diverse model families, suggesting that proactive transparency is a vital lever for agent optimization. Overall, our study suggests that information-transparency awareness is a crucial yet underexplored perspective for the future design of reasoning agents in real-world scenarios. Our code is available at https://github.com/deeplearning-wisc/Thinking-Agent.
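The abstract's proposed lever, explicitly prompting the agent to disclose information to the user, can be illustrated with a minimal sketch. Note that the prompt wording and function below are hypothetical illustrations, not the paper's actual implementation (which is in the linked repository):

```python
# Hypothetical sketch of the "prompt for information disclosure" lever:
# we append an explicit disclosure instruction to the agent's system prompt.
# BASE_SYSTEM_PROMPT and DISCLOSURE_INSTRUCTION are illustrative wordings,
# not the prompts used in the paper.

BASE_SYSTEM_PROMPT = (
    "You are an agent that completes tasks on behalf of the user."
)

DISCLOSURE_INSTRUCTION = (
    "In every reply, explicitly tell the user what you have found so far, "
    "what you plan to do next, and any information you still need from them."
)

def build_system_prompt(disclose: bool) -> str:
    """Return the agent's system prompt, optionally with the disclosure lever."""
    if disclose:
        return BASE_SYSTEM_PROMPT + " " + DISCLOSURE_INSTRUCTION
    return BASE_SYSTEM_PROMPT
```

The study's finding suggests that the `disclose=True` variant counteracts the "introverted" effect of mandatory thinking by keeping the agent-user information channel open.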