DeepPrune: Parallel Scaling without Inter-trace Redundancy
October 9, 2025
Authors: Shangqing Tu, Yaxuan Li, Yushi Bai, Lei Hou, Juanzi Li
cs.AI
Abstract
Parallel scaling has emerged as a powerful paradigm to enhance reasoning
capabilities in large language models (LLMs) by generating multiple
Chain-of-Thought (CoT) traces simultaneously. However, this approach introduces
significant computational inefficiency due to inter-trace redundancy -- our
analysis reveals that over 80% of parallel reasoning traces yield identical
final answers, representing substantial wasted computation. To address this
critical efficiency bottleneck, we propose DeepPrune, a novel framework that
enables efficient parallel scaling through dynamic pruning. Our method features
a specialized judge model trained with focal loss and oversampling techniques
to accurately predict answer equivalence from partial reasoning traces,
achieving an AUROC of 0.87 on equivalence prediction, combined with an online greedy
clustering algorithm that dynamically prunes redundant paths while preserving
answer diversity. Comprehensive evaluations across three challenging benchmarks
(AIME 2024, AIME 2025, and GPQA) and multiple reasoning models demonstrate that
DeepPrune reduces token usage by over 80% compared to conventional consensus
sampling in most cases while keeping accuracy within 3 percentage points of the
baseline. Our work establishes a new standard for efficient parallel reasoning.
Our code and data are available at: https://deepprune.github.io/
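
The judge model is described as a binary classifier trained with focal loss to cope with the heavy class imbalance (most trace pairs converge to the same answer). Below is a minimal sketch of a standard binary focal loss in PyTorch; the `gamma` and `alpha` values are common defaults, not values reported by the paper.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits: torch.Tensor, targets: torch.Tensor,
                      gamma: float = 2.0, alpha: float = 0.25) -> torch.Tensor:
    """Focal loss for an answer-equivalence judge.

    `logits` are raw model outputs of shape (N,); `targets` are 0./1.
    float labels (1 = the two traces reach the same final answer).
    """
    # Per-example binary cross-entropy computed on raw logits.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    # p_t = the probability the model assigns to the true class.
    p_t = torch.exp(-bce)
    # Class-balancing weight: alpha for positives, (1 - alpha) for negatives.
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # The modulating factor (1 - p_t)^gamma shrinks the loss of easy examples,
    # so training focuses on hard, ambiguous equivalence decisions.
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()
```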
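The pruning step can be read as online greedy clustering: each arriving trace is compared against the representatives kept so far and is dropped when the judge predicts it will reach an answer already covered. The sketch below illustrates this idea under stated assumptions; `prune_traces`, `judge`, and `threshold` are hypothetical stand-ins, not the paper's implementation.

```python
from typing import Callable, List

def prune_traces(traces: List[str],
                 judge: Callable[[str, str], float],
                 threshold: float = 0.5) -> List[str]:
    """Greedily cluster partial reasoning traces, keeping one per cluster.

    `judge(a, b)` is assumed to return the predicted probability that
    traces a and b will converge to the same final answer.
    """
    representatives: List[str] = []  # one surviving trace per answer cluster
    for trace in traces:  # traces arrive online, e.g. after a fixed token budget
        # Prune if any kept representative is predicted to be equivalent.
        if any(judge(rep, trace) >= threshold for rep in representatives):
            continue
        representatives.append(trace)  # novel candidate answer: keep it
    return representatives
```

Only the surviving representatives would continue decoding to completion, which is where the reported over-80% token reduction comes from while answer diversity is preserved.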