When to Continue Thinking: Adaptive Thinking Mode Switching for Efficient Reasoning
May 21, 2025
Authors: Xiaoyun Zhang, Jingqing Ruan, Xing Ma, Yawen Zhu, Haodong Zhao, Hao Li, Jiansong Chen, Ke Zeng, Xunliang Cai
cs.AI
Abstract
Large reasoning models (LRMs) achieve remarkable performance via long
reasoning chains, but often incur excessive computational overhead due to
redundant reasoning, especially on simple tasks. In this work, we
systematically quantify the upper bounds of LRMs under both Long-Thinking and
No-Thinking modes, and uncover the phenomenon of "Internal Self-Recovery
Mechanism" where models implicitly supplement reasoning during answer
generation. Building on this insight, we propose Adaptive Self-Recovery
Reasoning (ASRR), a framework that suppresses unnecessary reasoning and enables
implicit recovery. By introducing accuracy-aware length reward regulation, ASRR
adaptively allocates reasoning effort according to problem difficulty,
achieving high efficiency with negligible performance sacrifice. Experiments
across multiple benchmarks and models show that, compared with GRPO, ASRR
reduces reasoning budget by up to 32.5% (1.5B) and 25.7% (7B) with minimal
accuracy loss (1.2% and 0.6% pass@1), and significantly boosts harmless rates
on safety benchmarks (up to +21.7%). Our results highlight the potential of
ASRR for enabling efficient, adaptive, and safer reasoning in LRMs.
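
The abstract describes the accuracy-aware length reward only at a high level. As a rough illustration of how such a reward might allocate reasoning budget by difficulty, the minimal sketch below gates a length penalty on the group accuracy of a prompt's rollouts: easy prompts (high accuracy) are penalized for long reasoning chains, hard prompts are not. The function name, the gating rule, and the constants `alpha` and `acc_threshold` are assumptions made for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of an accuracy-aware length reward for a GRPO-style trainer.
# Names (alpha, acc_threshold) and the group-accuracy gate are illustrative assumptions.
from typing import Sequence


def accuracy_aware_length_reward(
    correct: Sequence[bool],      # per-rollout correctness for one prompt
    lengths: Sequence[int],       # reasoning-token counts for the same rollouts
    alpha: float = 0.2,           # strength of the length penalty (assumed)
    acc_threshold: float = 0.75,  # penalize length only once accuracy is high (assumed)
) -> list[float]:
    """Return per-rollout rewards: correctness minus a length penalty that is
    switched on only when the prompt already looks easy (high group accuracy)."""
    group_acc = sum(correct) / max(len(correct), 1)
    max_len = max(lengths) if lengths else 1
    rewards = []
    for ok, n_tokens in zip(correct, lengths):
        r = 1.0 if ok else 0.0
        if group_acc >= acc_threshold:
            # Easy prompt: discourage redundant reasoning via a relative-length penalty.
            r -= alpha * (n_tokens / max_len)
        rewards.append(r)
    return rewards


# Example: an easy prompt where all rollouts are correct; shorter chains score higher.
print(accuracy_aware_length_reward([True, True, True], [120, 800, 2000]))
```

Under this assumed design, prompts the model still gets wrong keep an unpenalized correctness reward, so reasoning effort is trimmed only where accuracy is already saturated.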