ChatPaper.ai


Unleashing Scientific Reasoning for Bio-experimental Protocol Generation via Structured Component-based Reward Mechanism

October 17, 2025
Authors: Haoran Sun, Yankai Jiang, Zhenyu Tang, Yaning Pan, Shuang Gu, Zekai Lin, Lilong Wang, Wenjie Lou, Lei Liu, Lei Bai, Xiaosong Wang
cs.AI

Abstract

The foundation of reproducible science lies in protocols that are precise, logically ordered, and executable. The autonomous generation of these protocols through natural language queries could greatly improve the efficiency of the reproduction process. However, current leading large language models (LLMs) often generate incomplete or inconsistent protocols, limiting their utility. To address this limitation, we first introduce SciRecipe, a large-scale dataset of over 12K structured protocols spanning 27 biological subfields and encompassing both comprehension and problem-solving tasks. To further improve protocol generation, we propose the "Sketch-and-Fill" paradigm, which separates analysis, structuring, and expression to ensure each step is explicit and verifiable. Complementing this, the structured component-based reward mechanism evaluates step granularity, action order, and semantic fidelity, aligning model optimization with experimental reliability. Building on these components, we develop Thoth, trained through a staged Knowledge-to-Action process that progresses from knowledge acquisition to operational reasoning and ultimately to robust, executable protocol generation. Across multiple benchmarks, Thoth consistently surpasses both proprietary and open-source LLMs, achieving significant improvements in step alignment, logical sequencing, and semantic accuracy. Our approach paves the way for reliable scientific assistants that bridge knowledge with experimental execution. All data, code, and models will be released publicly.
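The abstract names three criteria for the structured component-based reward (step granularity, action order, and semantic fidelity) but does not specify how each is computed. Below is a minimal, hypothetical sketch of such a reward: it assumes protocols are lists of step strings, uses plain string similarity as a stand-in for the paper's semantic fidelity measure, and uses invented names (`granularity_score`, `order_score`, `semantic_score`, `protocol_reward`) and weights that are not from the paper.

```python
from difflib import SequenceMatcher


def granularity_score(pred_steps, ref_steps):
    # Penalize over- or under-segmentation relative to the reference protocol.
    if not ref_steps or not pred_steps:
        return 0.0
    return min(len(pred_steps), len(ref_steps)) / max(len(pred_steps), len(ref_steps))


def order_score(pred_steps, ref_steps, match_threshold=0.5):
    # Greedily match each predicted step to its most similar reference step,
    # then reward the fraction of adjacent matched pairs kept in reference order.
    matched = []
    for p in pred_steps:
        best_idx, best_sim = None, 0.0
        for i, r in enumerate(ref_steps):
            sim = SequenceMatcher(None, p.lower(), r.lower()).ratio()
            if sim > best_sim:
                best_idx, best_sim = i, sim
        if best_idx is not None and best_sim > match_threshold:
            matched.append(best_idx)
    if len(matched) < 2:
        return 1.0 if matched else 0.0
    in_order = sum(1 for a, b in zip(matched, matched[1:]) if a < b)
    return in_order / (len(matched) - 1)


def semantic_score(pred_steps, ref_steps):
    # Mean best similarity of each reference step to any predicted step;
    # a real system would use an embedding- or LLM-based fidelity measure.
    if not ref_steps:
        return 0.0
    sims = [
        max((SequenceMatcher(None, r.lower(), p.lower()).ratio() for p in pred_steps),
            default=0.0)
        for r in ref_steps
    ]
    return sum(sims) / len(sims)


def protocol_reward(pred_steps, ref_steps, weights=(0.3, 0.3, 0.4)):
    # Weighted combination of the three components; weights are illustrative.
    w_g, w_o, w_s = weights
    return (w_g * granularity_score(pred_steps, ref_steps)
            + w_o * order_score(pred_steps, ref_steps)
            + w_s * semantic_score(pred_steps, ref_steps))
```

A protocol identical to the reference scores the maximum reward, while the same steps in reversed order lose the order component, illustrating how such a reward could align optimization with logical sequencing.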
PDF · October 22, 2025