

SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs

October 6, 2025
Authors: Dachuan Shi, Abedelkadir Asi, Keying Li, Xiangchi Yuan, Leyan Pan, Wenke Lee, Wen Xiao
cs.AI

Abstract

Recent work shows that, beyond discrete reasoning through explicit chain-of-thought steps, which is limited by the boundaries of natural language, large language models (LLMs) can also reason continuously in latent space, allowing richer information per step and thereby improving token efficiency. Despite this promise, latent reasoning still faces two challenges, especially in training-free settings: 1) purely latent reasoning broadens the search distribution by maintaining multiple implicit paths, which diffuses probability mass, introduces noise, and impedes convergence to a single high-confidence solution, thereby hurting accuracy; and 2) overthinking persists even without explicit text, wasting tokens and degrading efficiency. To address these issues, we introduce SwiReasoning, a training-free framework for LLM reasoning that features two key innovations: 1) SwiReasoning dynamically switches between explicit and latent reasoning, guided by block-wise confidence estimated from entropy trends in next-token distributions, to balance exploration and exploitation and promote timely convergence; and 2) by limiting the maximum number of thinking-block switches, SwiReasoning curbs overthinking and improves token efficiency across varying problem difficulties. On widely used mathematics and STEM benchmarks, SwiReasoning consistently improves average accuracy by 1.5%-2.8% across reasoning LLMs of different model families and scales. Furthermore, under constrained budgets, SwiReasoning improves average token efficiency by 56%-79%, with larger gains as budgets tighten.
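To make the switching policy concrete, here is a minimal, hypothetical sketch in the spirit of the abstract: per-block confidence is approximated by the mean next-token entropy over a fixed-size block, falling entropy triggers a switch to explicit reasoning, rising entropy a switch back to latent reasoning, and the total number of switches is capped. The names `SwitchController`, `window`, and `max_switches` are illustrative assumptions; the paper's actual block definition and trend criterion may differ.

```python
import torch
import torch.nn.functional as F

def token_entropy(logits: torch.Tensor) -> float:
    """Shannon entropy of the next-token distribution."""
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum().item()

class SwitchController:
    """Toggle between latent and explicit reasoning from block-wise
    entropy trends, with a cap on thinking-block switches.

    Hypothetical sketch: the abstract does not specify the exact
    confidence estimate or switching rule.
    """

    def __init__(self, window: int = 16, max_switches: int = 4):
        self.window = window              # tokens per confidence block
        self.max_switches = max_switches  # curbs overthinking
        self.entropies: list[float] = []
        self.prev_mean: float | None = None
        self.mode = "latent"              # start by exploring in latent space
        self.switches = 0

    def update(self, logits: torch.Tensor) -> str:
        """Record one decoding step; return the mode for the next step."""
        self.entropies.append(token_entropy(logits))
        if len(self.entropies) == self.window:
            mean = sum(self.entropies) / self.window
            self.entropies.clear()
            if self.prev_mean is not None and self.switches < self.max_switches:
                # Falling entropy ~ rising confidence: exploit by committing
                # to explicit tokens; rising entropy: keep exploring latently.
                target = "explicit" if mean < self.prev_mean else "latent"
                if target != self.mode:
                    self.mode = target
                    self.switches += 1
            self.prev_mean = mean
        return self.mode
```

In use, the caller would pass the model's next-token logits to `update` at each decoding step and, depending on the returned mode, feed back either the sampled token embedding (explicit) or the hidden state carrying the full distribution (latent), matching the training-free latent-reasoning setup the abstract describes.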