Done Is Better than Perfect: Unlocking Efficient Reasoning by Structured Multi-Turn Decomposition
May 26, 2025
Authors: Zihao Zeng, Xuyao Huang, Boxiu Li, Hao Zhang, Zhijie Deng
cs.AI
Abstract
Large Reasoning Models (LRMs) are criticized for generating excessively lengthy Chain-of-Thought (CoT) traces to derive the final answer, which incurs high first-token and overall latency. Typically, the CoT of an LRM mixes multiple thinking units, each attempting to produce a candidate answer to the original query. A natural idea to improve efficiency is therefore to reduce the number of thinking units. Yet the thinking units in vanilla CoT cannot be explicitly managed, which makes doing so challenging. This paper introduces Multi-Turn Decomposition (MinD), which decodes conventional CoT into a sequence of explicit, structured, turn-wise interactions to bridge this gap. In MinD, the model provides a multi-turn response to the query, where each turn contains a thinking unit and yields a corresponding answer. Subsequent turns can reflect on, verify, or revise the thinking and answer parts of earlier turns, or explore alternative approaches. This not only delivers an answer more quickly but also enables explicit control over the iterative reasoning process (i.e., users may halt or continue at any turn). We follow a supervised fine-tuning (SFT) then reinforcement learning (RL) paradigm to realize MinD. We first rephrase the outputs of an LRM into the multi-turn format by prompting another LLM, and then tune the LRM on such data. Observing that the tuned model tends to consume even more tokens than the original one (probably because the multi-turn format introduces additional answer tokens), we advocate leveraging RL algorithms such as GRPO to prioritize correct outputs with fewer turns. Trained on the MATH dataset with R1-Distill models, MinD achieves up to ~70% reductions in both output token usage and time to first token (TTFT), while maintaining competitive performance on reasoning benchmarks such as MATH-500, AIME24, AMC23, and GPQA-Diamond.
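The abstract does not give the exact turn delimiters or reward used in the paper, but the two mechanisms it describes, splitting a structured multi-turn response into (thinking, answer) turns and shaping a GRPO-style reward that prefers correct outputs with fewer turns, can be sketched as follows. This is a minimal illustrative sketch: the <think>/<answer> tags, the per-turn penalty, and the helper names are assumptions, not the paper's actual format or reward.

```python
import re

# Illustrative turn delimiters; the paper's exact MinD format tokens are not
# specified in the abstract, so <think>/<answer> tags are an assumption here.
TURN_PATTERN = re.compile(
    r"<think>(?P<think>.*?)</think>\s*<answer>(?P<answer>.*?)</answer>",
    re.DOTALL,
)

def split_turns(response: str) -> list[tuple[str, str]]:
    """Split a structured multi-turn response into (thinking, answer) pairs."""
    return [
        (m.group("think").strip(), m.group("answer").strip())
        for m in TURN_PATTERN.finditer(response)
    ]

def turn_aware_reward(response: str, is_correct: bool,
                      penalty_per_extra_turn: float = 0.1) -> float:
    """Toy GRPO-style scalar reward: only correct, well-formed responses earn
    reward, and responses that use fewer turns score higher."""
    turns = split_turns(response)
    if not is_correct or not turns:
        return 0.0
    return max(1.0 - penalty_per_extra_turn * (len(turns) - 1), 0.0)

# Example: a two-turn response where the second turn verifies the first.
demo = (
    "<think>Compute 12 * 8 directly.</think><answer>96</answer>"
    "<think>Check: 12 * 8 = 120 - 24 = 96.</think><answer>96</answer>"
)
print(split_turns(demo))              # two (thinking, answer) turns
print(turn_aware_reward(demo, True))  # 0.9: correct, with one extra turn
```

Because each turn already ends with an answer, a user following this scheme could stop decoding after any turn to trade extra verification for lower latency, which is the explicit control the abstract refers to.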