Chain of Mindset: Reasoning with Adaptive Cognitive Modes

February 10, 2026
Authors: Tianyi Jiang, Arctanx An, Hengyi Feng, Naixin Zhai, Haodong Li, Xiaomin Yu, Jiahui Liu, Hanwen Du, Shuo Zhang, Zhi Yang, Jie Huang, Yuhua Li, Yongxin Ni, Huacan Wang, Ronghao Chen
cs.AI

Abstract

Human problem-solving is never the repetition of a single mindset, by which we mean a distinct mode of cognitive processing. When tackling a specific task, we do not rely on a single mindset; instead, we integrate multiple mindsets within a single solution process. However, existing LLM reasoning methods fall into a common trap: they apply the same fixed mindset across all steps, overlooking that different stages of solving the same problem require fundamentally different mindsets. This single-minded assumption prevents models from reaching the next level of intelligence. To address this limitation, we propose Chain of Mindset (CoM), a training-free agentic framework that enables step-level adaptive mindset orchestration. CoM decomposes reasoning into four functionally heterogeneous mindsets: Spatial, Convergent, Divergent, and Algorithmic. A Meta-Agent dynamically selects the optimal mindset based on the evolving reasoning state, while a bidirectional Context Gate filters cross-module information flow to maintain effectiveness and efficiency. Experiments across six challenging benchmarks spanning mathematics, code generation, scientific QA, and spatial reasoning demonstrate that CoM achieves state-of-the-art performance, outperforming the strongest baseline by 4.96% and 4.72% in overall accuracy on Qwen3-VL-32B-Instruct and Gemini-2.0-Flash, respectively, while balancing reasoning efficiency. Our code is publicly available at https://github.com/QuantaAlpha/chain-of-mindset.
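The orchestration loop the abstract describes (a Meta-Agent choosing one of four mindsets per step, with a Context Gate filtering what each step sees) can be sketched as follows. This is a minimal illustration only: the selection rules, function names, and gating policy below are hypothetical placeholders, not the paper's actual implementation, which is LLM-driven and available at the repository above.

```python
# Hypothetical sketch of step-level adaptive mindset orchestration.
# All heuristics here stand in for the paper's LLM-based Meta-Agent.

MINDSETS = ("spatial", "convergent", "divergent", "algorithmic")


def select_mindset(step: str) -> str:
    """Toy Meta-Agent: map the current reasoning step to a mindset.

    Keyword rules are illustrative; the real framework conditions on
    the evolving reasoning state via an LLM.
    """
    if "diagram" in step or "geometry" in step:
        return "spatial"
    if "enumerate" in step or "brainstorm" in step:
        return "divergent"
    if "compute" in step or "code" in step:
        return "algorithmic"
    return "convergent"  # default: synthesize toward a single answer


def context_gate(history: list[str], limit: int = 2) -> list[str]:
    """Toy Context Gate: pass only the most recent steps downstream,
    standing in for the paper's bidirectional information filtering."""
    return history[-limit:]


def reason(steps: list[str]) -> list[tuple[str, list[str]]]:
    """Run the loop: per step, gate the context and pick a mindset."""
    history: list[str] = []
    trace: list[tuple[str, list[str]]] = []
    for step in steps:
        mindset = select_mindset(step)
        visible = context_gate(history)
        trace.append((mindset, visible))
        history.append(f"{mindset}:{step}")
    return trace
```

The point of the sketch is the control flow, not the heuristics: mindset choice happens per step rather than once per problem, and each module sees a filtered context rather than the full history.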