Deep Think with Confidence
August 21, 2025
Authors: Yichao Fu, Xuewei Wang, Yuandong Tian, Jiawei Zhao
cs.AI
Abstract
Large Language Models (LLMs) have shown great potential in reasoning tasks
through test-time scaling methods like self-consistency with majority voting.
However, this approach often leads to diminishing returns in accuracy and high
computational overhead. To address these challenges, we introduce Deep Think
with Confidence (DeepConf), a simple yet powerful method that enhances both
reasoning efficiency and performance at test time. DeepConf leverages
model-internal confidence signals to dynamically filter out low-quality
reasoning traces during or after generation. It requires no additional model
training or hyperparameter tuning and can be seamlessly integrated into
existing serving frameworks. We evaluate DeepConf across a variety of reasoning
tasks and the latest open-source models, including Qwen 3 and GPT-OSS series.
Notably, on challenging benchmarks such as AIME 2025, DeepConf@512 achieves up
to 99.9% accuracy and reduces generated tokens by up to 84.7% compared to full
parallel thinking.
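
To make the core idea concrete, below is a minimal sketch (not the authors' exact algorithm) of offline confidence-based filtering followed by majority voting: each sampled reasoning trace is scored by a simple confidence proxy (mean token log-probability, assumed here as an illustration of a "model-internal confidence signal"), low-confidence traces are discarded, and the surviving traces vote on the final answer. The names `trace_confidence`, `filter_and_vote`, and `keep_ratio` are hypothetical and for illustration only.

```python
# Hedged sketch: confidence-filtered majority voting over reasoning traces.
# Assumption: each trace exposes its final answer and per-token logprobs.
from collections import Counter
from typing import List, Tuple


def trace_confidence(token_logprobs: List[float]) -> float:
    """Score a trace by its average token log-probability (an illustrative
    proxy for a model-internal confidence signal, not the paper's exact metric)."""
    return sum(token_logprobs) / max(len(token_logprobs), 1)


def filter_and_vote(traces: List[Tuple[str, List[float]]],
                    keep_ratio: float = 0.9) -> str:
    """Keep the top `keep_ratio` fraction of traces by confidence,
    then take a majority vote over their final answers."""
    scored = sorted(traces, key=lambda t: trace_confidence(t[1]), reverse=True)
    kept = scored[:max(1, int(len(scored) * keep_ratio))]
    votes = Counter(answer for answer, _ in kept)
    return votes.most_common(1)[0][0]


# Usage: each tuple is (final answer, per-token logprobs of that trace).
traces = [
    ("42", [-0.1, -0.2, -0.05]),   # high-confidence trace
    ("42", [-0.3, -0.1, -0.2]),    # high-confidence trace
    ("17", [-2.5, -3.0, -1.8]),    # low-confidence trace, filtered out
]
print(filter_and_vote(traces))  # -> "42"
```

The online variant described in the abstract would instead monitor confidence during generation and terminate low-quality traces early, which is what enables the reported token savings; the sketch above only illustrates the post-hoc filtering case.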