

Search-o1: Agentic Search-Enhanced Large Reasoning Models

January 9, 2025
Authors: Xiaoxi Li, Guanting Dong, Jiajie Jin, Yuyao Zhang, Yujia Zhou, Yutao Zhu, Peitian Zhang, Zhicheng Dou
cs.AI

Abstract

Large reasoning models (LRMs) like OpenAI-o1 have demonstrated impressive long stepwise reasoning capabilities through large-scale reinforcement learning. However, their extended reasoning processes often suffer from knowledge insufficiency, leading to frequent uncertainties and potential errors. To address this limitation, we introduce Search-o1, a framework that enhances LRMs with an agentic retrieval-augmented generation (RAG) mechanism and a Reason-in-Documents module for refining retrieved documents. Search-o1 integrates an agentic search workflow into the reasoning process, enabling dynamic retrieval of external knowledge when LRMs encounter uncertain knowledge points. Additionally, due to the verbose nature of retrieved documents, we design a separate Reason-in-Documents module to deeply analyze the retrieved information before injecting it into the reasoning chain, minimizing noise and preserving coherent reasoning flow. Extensive experiments on complex reasoning tasks in science, mathematics, and coding, as well as six open-domain QA benchmarks, demonstrate the strong performance of Search-o1. This approach enhances the trustworthiness and applicability of LRMs in complex reasoning tasks, paving the way for more reliable and versatile intelligent systems. The code is available at https://github.com/sunnynexus/Search-o1.
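
The abstract describes an agentic retrieval loop: the reasoning model signals a knowledge gap, a search is issued, and a Reason-in-Documents step condenses the retrieved documents before they re-enter the reasoning chain. The Python sketch below illustrates how such a loop could be wired together; the tag strings, function names (`generate`, `web_search`, `reason_in_documents`), and control flow are illustrative assumptions rather than the paper's actual interface. See https://github.com/sunnynexus/Search-o1 for the real implementation.

```python
# Minimal sketch of an agentic search-enhanced reasoning loop in the spirit of
# Search-o1. All tag strings, function names, and signatures are assumptions
# for illustration only; the LRM call and the retriever are left as stubs.

QUERY_OPEN, QUERY_CLOSE = "<search>", "</search>"   # hypothetical trigger tags


def generate(prompt: str, stop: list[str]) -> str:
    """Placeholder for a call to the reasoning model, stopping at any stop string."""
    raise NotImplementedError


def web_search(query: str, k: int = 5) -> list[str]:
    """Placeholder retriever returning the top-k raw documents for a query."""
    raise NotImplementedError


def reason_in_documents(question: str, query: str, docs: list[str]) -> str:
    """Condense verbose retrieved documents into a short, query-focused note
    before injecting it into the reasoning chain (noise reduction)."""
    prompt = (
        f"Question: {question}\nSearch query: {query}\n"
        "Documents:\n" + "\n---\n".join(docs) +
        "\nExtract only the facts relevant to the search query:"
    )
    return generate(prompt, stop=[])


def answer_with_agentic_search(question: str, max_searches: int = 5) -> str:
    """Run the reasoning chain, pausing for retrieval whenever the model
    emits a search query to cover an uncertain knowledge point."""
    chain = f"Question: {question}\nReasoning:\n"
    for _ in range(max_searches):
        step = generate(chain, stop=[QUERY_CLOSE])
        chain += step
        if QUERY_OPEN not in step:
            return chain                      # no knowledge gap signalled: done
        query = step.split(QUERY_OPEN, 1)[1].strip()   # query emitted by the model
        docs = web_search(query)
        note = reason_in_documents(question, query, docs)
        chain += f"{QUERY_CLOSE}\n[retrieved knowledge] {note}\n"
    return chain
```

The key design point reflected here is that raw search results never enter the chain directly: they pass through the document-analysis step first, which keeps the reasoning flow coherent and limits noise from verbose web pages.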