Let LLMs Break Free from Overthinking via Self-Braking Tuning

May 20, 2025
Authors: Haoran Zhao, Yuchen Yan, Yongliang Shen, Haolei Xu, Wenqi Zhang, Kaitao Song, Jian Shao, Weiming Lu, Jun Xiao, Yueting Zhuang
cs.AI

Abstract

Large reasoning models (LRMs), such as OpenAI o1 and DeepSeek-R1, have significantly enhanced their reasoning capabilities by generating longer chains of thought, demonstrating outstanding performance across a variety of tasks. However, this performance gain comes at the cost of a substantial increase in redundant reasoning during generation, leading to high computational overhead and exacerbating the problem of overthinking. Although numerous existing approaches aim to address overthinking, they typically rely on external interventions. In this paper, we propose a novel framework, Self-Braking Tuning (SBT), which tackles overthinking by allowing the model to regulate its own reasoning process, eliminating the reliance on external control mechanisms. We construct a set of overthinking identification metrics based on standard answers and design a systematic method to detect redundant reasoning. This method accurately identifies unnecessary steps within a reasoning trajectory and generates training signals for learning self-regulation behavior. Building on this foundation, we develop a complete strategy for constructing data with adaptive reasoning lengths and introduce an innovative braking prompt mechanism that enables the model to learn naturally when to terminate its reasoning. Experiments on mathematical benchmarks (AIME, AMC, MATH500, GSM8K) demonstrate that our method reduces token consumption by up to 60% while maintaining accuracy comparable to that of unconstrained models.
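The abstract describes the SBT pipeline only at a high level. The sketch below illustrates one plausible reading of its two data-construction ideas: an answer-grounded redundancy metric and a braking cue appended where a trace is truncated. Everything here is an assumption for illustration; the names `redundancy_ratio`, `truncate_with_brake`, and `BRAKE_PROMPT`, the word-level step matching, and the cue wording are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: the paper's actual metrics and prompts are not
# given in the abstract; all names and logic below are assumptions.

def redundancy_ratio(steps: list[str], answer: str) -> float:
    """Fraction of reasoning tokens spent after the standard answer first
    appears in the trace (a crude stand-in for an overthinking metric)."""
    lengths = [len(s.split()) for s in steps]  # rough per-step token counts
    total = sum(lengths)
    for i, step in enumerate(steps):
        if answer in step:  # naive match against the ground-truth answer
            used = sum(lengths[: i + 1])
            return (total - used) / total if total else 0.0
    return 0.0  # answer never reached: no steps flagged as redundant


# Hypothetical braking cue: appended at the truncation point so the model
# learns to stop on its own rather than via an external controller.
BRAKE_PROMPT = "Wait, I already have the answer; further checking is unnecessary."

def truncate_with_brake(steps: list[str], answer: str) -> str:
    """Build an adaptive-length training example: keep steps up to the first
    correct answer, then append the braking cue."""
    for i, step in enumerate(steps):
        if answer in step:
            return "\n".join(steps[: i + 1] + [BRAKE_PROMPT])
    return "\n".join(steps)  # leave traces that never reach the answer intact


if __name__ == "__main__":
    trace = [
        "3 * 4 = 12, so the answer is 12.",
        "Let me verify: 12 / 4 = 3, which checks out.",
        "Re-verifying once more from scratch...",
    ]
    print(redundancy_ratio(trace, "12"))    # share of tokens after step 1
    print(truncate_with_brake(trace, "12"))
```

In SBT proper, signals like these would presumably feed supervised tuning so that the braking behavior is internalized by the model; the abstract gives no further implementation detail.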
