Deep Think with Confidence

August 21, 2025
Authors: Yichao Fu, Xuewei Wang, Yuandong Tian, Jiawei Zhao
cs.AI

Abstract

Large Language Models (LLMs) have shown great potential in reasoning tasks through test-time scaling methods like self-consistency with majority voting. However, this approach often leads to diminishing returns in accuracy and high computational overhead. To address these challenges, we introduce Deep Think with Confidence (DeepConf), a simple yet powerful method that enhances both reasoning efficiency and performance at test time. DeepConf leverages model-internal confidence signals to dynamically filter out low-quality reasoning traces during or after generation. It requires no additional model training or hyperparameter tuning and can be seamlessly integrated into existing serving frameworks. We evaluate DeepConf across a variety of reasoning tasks and the latest open-source models, including Qwen 3 and GPT-OSS series. Notably, on challenging benchmarks such as AIME 2025, DeepConf@512 achieves up to 99.9% accuracy and reduces generated tokens by up to 84.7% compared to full parallel thinking.
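
To make the trace-filtering idea concrete, below is a minimal sketch of the offline variant described in the abstract, assuming each sampled trace exposes its per-token log-probabilities and a parsed final answer. The names Trace, trace_confidence, deepconf_vote, and keep_ratio are illustrative, not the paper's API, and the paper's actual confidence signal is likely richer than the simple per-trace average used here.

```python
import math
from collections import defaultdict
from dataclasses import dataclass
from typing import List

@dataclass
class Trace:
    answer: str                   # final answer parsed from the trace
    token_logprobs: List[float]   # per-token log-probabilities from the sampler

def trace_confidence(trace: Trace) -> float:
    """Geometric-mean token probability: exp of the average token
    log-probability. Higher values mean the model generated the trace
    with more confidence."""
    if not trace.token_logprobs:
        return 0.0
    return math.exp(sum(trace.token_logprobs) / len(trace.token_logprobs))

def deepconf_vote(traces: List[Trace], keep_ratio: float = 0.5) -> str:
    """Drop the lowest-confidence traces, then take a confidence-weighted
    majority vote over the surviving answers."""
    ranked = sorted(traces, key=trace_confidence, reverse=True)
    kept = ranked[: max(1, int(len(ranked) * keep_ratio))]
    votes = defaultdict(float)
    for t in kept:
        votes[t.answer] += trace_confidence(t)
    return max(votes, key=votes.get)
```

The online mode mentioned in the abstract would monitor the same kind of confidence signal during generation and stop a trace early once it drops too low, which is presumably where much of the reported reduction in generated tokens comes from.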