

Beyond Ten Turns: Unlocking Long-Horizon Agentic Search with Large-Scale Asynchronous RL

August 11, 2025
作者: Jiaxuan Gao, Wei Fu, Minyang Xie, Shusheng Xu, Chuyi He, Zhiyu Mei, Banghua Zhu, Yi Wu
cs.AI

Abstract

Recent advancements in LLM-based agents have demonstrated remarkable capabilities in handling complex, knowledge-intensive tasks by integrating external tools. Among diverse choices of tools, search tools play a pivotal role in accessing vast external knowledge. However, open-source agents still fall short of achieving expert-level Search Intelligence: the ability to resolve ambiguous queries, generate precise searches, analyze results, and conduct thorough exploration. Existing approaches are limited in scalability, efficiency, and data quality. For example, the small turn limits in existing online RL methods (e.g., ≤10) restrict the learning of complex strategies. This paper introduces ASearcher, an open-source project for large-scale RL training of search agents. Our key contributions include: (1) scalable, fully asynchronous RL training that enables long-horizon search while maintaining high training efficiency; (2) a prompt-based LLM agent that autonomously synthesizes high-quality and challenging QA pairs, creating a large-scale QA dataset. Through RL training, our prompt-based QwQ-32B agent achieves substantial improvements, with 46.7% and 20.8% Avg@4 gains on xBench and GAIA, respectively. Notably, our agent exhibits extreme long-horizon search, with tool calls exceeding 40 turns and output tokens exceeding 150k during training. With a simple agent design and no external LLMs, ASearcher-Web-QwQ achieves Avg@4 scores of 42.1 on xBench and 52.8 on GAIA, surpassing existing open-source 32B agents. We open-source our models, training data, and code at https://github.com/inclusionAI/ASearcher.
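The fully asynchronous training described in contribution (1) can be illustrated with a minimal producer-consumer sketch: rollout workers generate trajectories independently of the trainer, so a single long-horizon episode (40+ tool calls) never blocks gradient updates. All names here are hypothetical; ASearcher's actual system is far more elaborate (distributed workers, real LLM inference, staleness handling).

```python
import queue
import threading

def rollout_worker(policy_version, out_queue, n_episodes):
    # Hypothetical worker: runs search episodes asynchronously.
    # Each trajectory records which policy version produced it, since
    # the trainer may have updated the policy in the meantime.
    for i in range(n_episodes):
        trajectory = {"turns": 40 + i, "policy_version": policy_version[0]}
        out_queue.put(trajectory)

def trainer(in_queue, policy_version, n_updates, batch_size=2):
    # Consumes whatever trajectories are ready instead of waiting for a
    # synchronized batch; stale (off-policy) samples would be corrected
    # with importance weighting in a real implementation.
    for _ in range(n_updates):
        batch = [in_queue.get() for _ in range(batch_size)]
        policy_version[0] += 1  # stand-in for a gradient update
    return policy_version[0]

traj_queue = queue.Queue()
version = [0]  # mutable shared policy version
workers = [
    threading.Thread(target=rollout_worker, args=(version, traj_queue, 4))
    for _ in range(2)
]
for w in workers:
    w.start()
final_version = trainer(traj_queue, version, n_updates=4)
for w in workers:
    w.join()
print(final_version)
```

The key design point this sketch captures is the decoupling: workers never wait for the trainer, and the trainer never waits for the slowest episode, which is what makes very long trajectories (150k+ output tokens) affordable during training.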