
MixReasoning: Switching Modes to Think

October 7, 2025
Authors: Haiquan Lu, Gongfan Fang, Xinyin Ma, Qi Li, Xinchao Wang
cs.AI

Abstract

Reasoning models enhance performance by tackling problems in a step-by-step manner, decomposing them into sub-problems and exploring long chains of thought before producing an answer. However, applying extended reasoning to every step introduces substantial redundancy, as sub-problems vary widely in difficulty and complexity: a small number of pivotal steps are genuinely challenging and decisive for the final answer, while many others only involve straightforward revisions or simple computations. Therefore, a natural idea is to endow reasoning models with the ability to adaptively respond to this variation, rather than treating all steps with the same level of elaboration. To this end, we propose MixReasoning, a framework that dynamically adjusts the depth of reasoning within a single response. The resulting chain of thought then becomes a mixture of detailed reasoning on difficult steps and concise inference on simpler ones. Experiments on GSM8K, MATH-500, and AIME show that MixReasoning shortens reasoning length and substantially improves efficiency without compromising accuracy.
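To make the mode-switching idea concrete, the sketch below shows one hypothetical way a per-step difficulty signal could gate between a detailed chain-of-thought prompt and a concise one. It is not the paper's implementation: `estimate_difficulty`, `generate_step`, the prompt prefixes, and the threshold are all illustrative placeholders.

```python
# Minimal illustrative sketch (assumptions, not the MixReasoning implementation):
# switch between a detailed "thinking" mode and a concise mode per sub-step,
# based on a difficulty proxy.

from typing import List

DETAILED_PREFIX = "Let me reason carefully, step by step:"  # extended reasoning mode
CONCISE_PREFIX = "Briefly:"                                 # short inference mode


def estimate_difficulty(step: str) -> float:
    """Hypothetical difficulty proxy; a real system might use model uncertainty
    or a learned score. Here, longer step descriptions count as harder."""
    return min(1.0, len(step.split()) / 30.0)


def generate_step(prompt: str) -> str:
    """Placeholder for a model call; echoes the prompt so the sketch runs."""
    return f"[model output for: {prompt}]"


def mix_reasoning(sub_steps: List[str], threshold: float = 0.5) -> List[str]:
    """Produce a mixed trace: detailed reasoning on hard steps, concise on easy ones."""
    trace = []
    for step in sub_steps:
        prefix = DETAILED_PREFIX if estimate_difficulty(step) >= threshold else CONCISE_PREFIX
        trace.append(generate_step(f"{prefix} {step}"))
    return trace


if __name__ == "__main__":
    steps = [
        "Compute 12 * 7.",
        "Determine whether the recurrence converges and derive its closed form "
        "from the characteristic equation.",
    ]
    for line in mix_reasoning(steps):
        print(line)
```

In this toy version the easy arithmetic step falls below the threshold and gets the concise prefix, while the harder derivation step triggers the detailed prefix, yielding a single response that mixes both reasoning depths.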