OPE: Overcoming Information Saturation in Parallel Thinking via Outline-Guided Path Exploration
February 9, 2026
Authors: Qi Guo, Jianing Wang, Deyang Kong, Xiangyu Xi, Jianfei Zhang, Yi Lu, Jingang Wang, Wei Wang, Shikun Zhang, Wei Ye
cs.AI
Abstract
Parallel thinking has emerged as a new paradigm for large reasoning models (LRMs) to tackle complex problems. Recent methods leverage Reinforcement Learning (RL) to enhance parallel thinking, aiming to address the computational cost and limited effectiveness of supervised fine-tuning. However, most existing studies focus primarily on optimizing the aggregation phase, with limited attention to the path exploration stage. In this paper, we theoretically analyze the optimization of parallel thinking under the Reinforcement Learning with Verifiable Rewards (RLVR) setting and identify the mutual information bottleneck among exploration paths as a fundamental restriction on overall performance. To address this, we propose Outline-Guided Path Exploration (OPE), which explicitly partitions the solution space by generating diverse reasoning outlines prior to parallel path reasoning, thereby reducing information redundancy and improving the diversity of information captured across exploration paths. We implement OPE with an iterative RL strategy that optimizes outline planning and outline-guided reasoning independently. Extensive experiments on multiple challenging mathematical benchmarks demonstrate that OPE effectively improves reasoning performance under different aggregation strategies, enabling LRMs to more reliably discover correct solutions.
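To make the pipeline described in the abstract concrete, the following minimal Python sketch mimics the control flow of outline-guided path exploration: generate several diverse outlines, run one reasoning path per outline, then aggregate the results. All names here (generate_outlines, reason_along_outline, aggregate) are hypothetical stand-ins for the model calls that the paper trains with its iterative RL strategy; this is an illustrative sketch, not the authors' implementation.

```python
from dataclasses import dataclass

# Conceptual sketch of Outline-Guided Path Exploration (OPE).
# The function bodies are placeholders; in the paper each stage is an LRM
# policy optimized with reinforcement learning.

@dataclass
class ReasoningPath:
    outline: str
    solution: str

def generate_outlines(problem: str, k: int) -> list[str]:
    """Stage 1 (hypothetical): produce k diverse outlines that partition the solution space."""
    return [f"Outline {i}: attack '{problem}' via strategy {i}" for i in range(k)]

def reason_along_outline(problem: str, outline: str) -> str:
    """Stage 2 (hypothetical): run one full reasoning path conditioned on a single outline."""
    return f"Solution obtained by following [{outline}]"

def aggregate(paths: list[ReasoningPath]) -> str:
    """Stage 3 (hypothetical): combine parallel paths, e.g. by voting or a learned aggregator."""
    return paths[0].solution  # placeholder selection criterion

def outline_guided_parallel_thinking(problem: str, k: int = 4) -> str:
    outlines = generate_outlines(problem, k)
    paths = [ReasoningPath(o, reason_along_outline(problem, o)) for o in outlines]
    return aggregate(paths)

if __name__ == "__main__":
    print(outline_guided_parallel_thinking("Find all integer solutions of x^2 + y^2 = 25"))
```

The key design point the sketch illustrates is that each parallel path is conditioned on a distinct outline before reasoning begins, which is how OPE reduces redundancy across paths compared with sampling unconstrained parallel rollouts.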