Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models

March 20, 2025
Authors: Yang Sui, Yu-Neng Chuang, Guanchu Wang, Jiamu Zhang, Tianyi Zhang, Jiayi Yuan, Hongyi Liu, Andrew Wen, Shaochen Zhong, Hanjie Chen, Xia Hu
cs.AI

Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities in complex tasks. Recent advancements in Large Reasoning Models (LRMs), such as OpenAI o1 and DeepSeek-R1, have further improved performance in System-2 reasoning domains like mathematics and programming by harnessing supervised fine-tuning (SFT) and reinforcement learning (RL) techniques to enhance Chain-of-Thought (CoT) reasoning. However, while longer CoT reasoning sequences improve performance, they also introduce significant computational overhead due to verbose and redundant outputs, known as the "overthinking phenomenon". In this paper, we provide the first structured survey to systematically investigate the current progress toward achieving efficient reasoning in LLMs. Overall, based on the inherent mechanisms of LLMs, we categorize existing works into several key directions: (1) model-based efficient reasoning, which considers optimizing full-length reasoning models into more concise reasoning models or directly training efficient reasoning models; (2) reasoning output-based efficient reasoning, which aims to dynamically reduce reasoning steps and length during inference; (3) input prompt-based efficient reasoning, which seeks to enhance reasoning efficiency based on input prompt properties such as difficulty or length control. Additionally, we introduce the use of efficient data for training reasoning models, explore the reasoning capabilities of small language models, and discuss evaluation methods and benchmarking.
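
To make the third direction concrete, below is a minimal, illustrative sketch of prompt-based length control: the prompt requests step-by-step reasoning under an explicit token budget, with `max_tokens` as a hard backstop. It assumes the OpenAI Python SDK (v1+) and an `OPENAI_API_KEY` environment variable; the model name, budget values, and the `concise_cot_answer` helper are hypothetical choices for illustration, not methods prescribed by the survey.

```python
# Illustrative sketch of input prompt-based efficient reasoning via an
# explicit length budget. Model name, budget, and helper are assumptions,
# not the survey's own method.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def concise_cot_answer(question: str, budget_tokens: int = 128) -> str:
    """Request step-by-step (CoT) reasoning capped by an explicit token budget."""
    prompt = (
        "Answer the question below. Think step by step, but keep your "
        f"reasoning under {budget_tokens} tokens, then give the final "
        "answer on its own line prefixed with 'Answer:'.\n\n"
        f"Question: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",            # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        max_tokens=budget_tokens + 64,  # hard cap as a backstop
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(concise_cot_answer("What is 17 * 24?"))
```

Varying `budget_tokens` with an estimate of question difficulty is one simple way to trade accuracy for cost, in the spirit of the difficulty-aware prompt methods the survey covers.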
