SwiReasoning: Switch-Thinking in Latent and Explicit for Pareto-Superior Reasoning LLMs
October 6, 2025
Authors: Dachuan Shi, Abedelkadir Asi, Keying Li, Xiangchi Yuan, Leyan Pan, Wenke Lee, Wen Xiao
cs.AI
Abstract
Recent work shows that, beyond discrete reasoning through explicit
chain-of-thought steps, which are limited by the boundaries of natural
languages, large language models (LLMs) can also reason continuously in latent
space, allowing richer information per step and thereby improving token
efficiency. Despite this promise, latent reasoning still faces two challenges,
especially in training-free settings: 1) purely latent reasoning broadens the
search distribution by maintaining multiple implicit paths, which diffuses
probability mass, introduces noise, and impedes convergence to a single
high-confidence solution, thereby hurting accuracy; and 2) overthinking
persists even without explicit text, wasting tokens and degrading efficiency.
To address these issues, we introduce SwiReasoning, a training-free framework
for LLM reasoning which features two key innovations: 1) SwiReasoning
dynamically switches between explicit and latent reasoning, guided by
block-wise confidence estimated from entropy trends in next-token
distributions, to balance exploration and exploitation and promote timely
convergence. 2) By limiting the maximum number of thinking-block switches,
SwiReasoning curbs overthinking and improves token efficiency across varying
problem difficulties. On widely used mathematics and STEM benchmarks,
SwiReasoning consistently improves average accuracy by 1.5%-2.8% across
reasoning LLMs of different model families and scales. Furthermore, under
constrained budgets, SwiReasoning improves average token efficiency by 56%-79%,
with larger gains as budgets tighten.
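The abstract describes a switching rule driven by block-wise confidence estimated from entropy trends in the next-token distributions, with a cap on the number of thinking-block switches. The sketch below is a minimal, illustrative rendering of that idea, not the authors' implementation: the window size, the least-squares slope as the trend estimate, and names such as `SwitchController`, `entropy_window`, and `max_switches` are assumptions introduced here for clarity.

```python
# Hedged sketch of an entropy-trend switching rule, assuming access to the
# model's per-step next-token probability distributions. Parameter names and
# the slope-based trend test are illustrative choices, not the paper's exact method.
import numpy as np

def token_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of one next-token distribution."""
    p = np.clip(probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

class SwitchController:
    """Chooses 'latent' vs. 'explicit' reasoning for the next block.

    Heuristic: rising entropy over a sliding window (falling block-wise
    confidence) favors latent exploration; falling entropy favors explicit
    decoding so the model commits to a single high-confidence path. The
    total number of switches is capped to curb overthinking.
    """

    def __init__(self, entropy_window: int = 8, max_switches: int = 4):
        self.entropy_window = entropy_window
        self.max_switches = max_switches
        self.entropies: list[float] = []
        self.mode = "latent"            # start by exploring in latent space
        self.switches_used = 0

    def update(self, next_token_probs: np.ndarray) -> str:
        """Record one step's distribution and return the mode for the next step."""
        self.entropies.append(token_entropy(next_token_probs))
        if (len(self.entropies) < self.entropy_window
                or self.switches_used >= self.max_switches):
            return self.mode

        recent = np.asarray(self.entropies[-self.entropy_window:])
        # Trend estimate: slope of a least-squares line over the window.
        slope = np.polyfit(np.arange(len(recent)), recent, deg=1)[0]

        if self.mode == "latent" and slope < 0:
            # Confidence rising: switch to explicit reasoning to converge.
            self.mode, self.switches_used = "explicit", self.switches_used + 1
        elif self.mode == "explicit" and slope > 0:
            # Confidence falling: switch back to latent exploration.
            self.mode, self.switches_used = "latent", self.switches_used + 1
        return self.mode

# Toy usage with random stand-in logits (no actual LLM involved).
rng = np.random.default_rng(0)
ctrl = SwitchController()
for _ in range(32):
    logits = rng.normal(size=32_000)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    mode = ctrl.update(probs)
print(mode, ctrl.switches_used)
```

In the actual framework, the "latent" mode would feed continuous hidden states (e.g., a probability-weighted mixture of token embeddings) back into the decoder instead of sampling a discrete token; that model-side plumbing is omitted here.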