

ParallelMuse: Agentic Parallel Thinking for Deep Information Seeking

October 28, 2025
Authors: Baixuan Li, Dingchu Zhang, Jialong Wu, Wenbiao Yin, Zhengwei Tao, Yida Zhao, Liwen Zhang, Haiyang Shen, Runnan Fang, Pengjun Xie, Jingren Zhou, Yong Jiang
cs.AI

Abstract

Parallel thinking expands exploration breadth, complementing the deep exploration of information-seeking (IS) agents to further enhance problem-solving capability. However, conventional parallel thinking faces two key challenges in this setting: inefficiency from repeatedly rolling out from scratch, and difficulty in integrating long-horizon reasoning trajectories during answer generation, as limited context capacity prevents full consideration of the reasoning process. To address these issues, we propose ParallelMuse, a two-stage paradigm designed for deep IS agents. The first stage, Functionality-Specified Partial Rollout, partitions generated sequences into functional regions and performs uncertainty-guided path reuse and branching to enhance exploration efficiency. The second stage, Compressed Reasoning Aggregation, exploits reasoning redundancy to losslessly compress information relevant to answer derivation and synthesize a coherent final answer. Experiments across multiple open-source agents and benchmarks demonstrate up to 62% performance improvement with a 10--30% reduction in exploratory token consumption.
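To make the two-stage paradigm described above easier to follow, here is a minimal illustrative sketch, not the authors' implementation. All names in it (PartialRollout, functionality_specified_partial_rollout, compressed_reasoning_aggregation, the branch callback, and uncertainty_threshold) are hypothetical, and the scalar uncertainty score and evidence deduplication merely stand in for the paper's uncertainty-guided branching and lossless compression of answer-relevant information.

```python
# Toy sketch of the two-stage flow described in the abstract.
# All identifiers are hypothetical; this is not the paper's code.

from dataclasses import dataclass
from typing import Callable


@dataclass
class PartialRollout:
    """A generated trajectory split into functional regions."""
    reasoning: list[str]   # intermediate reasoning steps
    evidence: list[str]    # retrieved, answer-relevant snippets
    uncertainty: float     # heuristic score; higher = less confident


def functionality_specified_partial_rollout(
    seed_rollouts: list[PartialRollout],
    branch: Callable[[PartialRollout], PartialRollout],
    uncertainty_threshold: float = 0.5,
) -> list[PartialRollout]:
    """Stage 1 (sketch): reuse confident path prefixes and branch only
    where uncertainty is high, instead of re-rolling from scratch."""
    expanded: list[PartialRollout] = []
    for rollout in seed_rollouts:
        expanded.append(rollout)              # reuse the existing path
        if rollout.uncertainty > uncertainty_threshold:
            expanded.append(branch(rollout))  # branch at the uncertain region
    return expanded


def compressed_reasoning_aggregation(rollouts: list[PartialRollout]) -> str:
    """Stage 2 (sketch): drop redundant evidence across trajectories so the
    aggregate fits a bounded context, then synthesize a single answer."""
    seen: set[str] = set()
    compressed: list[str] = []
    for rollout in rollouts:
        for snippet in rollout.evidence:
            if snippet not in seen:           # keep each fact only once
                seen.add(snippet)
                compressed.append(snippet)
    return " ".join(compressed)               # stand-in for answer synthesis
```

The point of the sketch is the division of labor: stage 1 trades repeated from-scratch rollouts for selective branching on uncertain regions, and stage 2 exploits redundancy across trajectories so that answer generation sees a compact, non-duplicated view of the evidence rather than every full reasoning trace.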