

MOOSE-Chem2: Exploring LLM Limits in Fine-Grained Scientific Hypothesis Discovery via Hierarchical Search

May 25, 2025
作者: Zonglin Yang, Wanhao Liu, Ben Gao, Yujie Liu, Wei Li, Tong Xie, Lidong Bing, Wanli Ouyang, Erik Cambria, Dongzhan Zhou
cs.AI

Abstract

Large language models (LLMs) have shown promise in automating scientific hypothesis generation, yet existing approaches primarily yield coarse-grained hypotheses lacking critical methodological and experimental details. We introduce and formally define the novel task of fine-grained scientific hypothesis discovery, which entails generating detailed, experimentally actionable hypotheses from coarse initial research directions. We frame this as a combinatorial optimization problem and investigate the upper limits of LLMs' capacity to solve it when maximally leveraged. Specifically, we explore four foundational questions: (1) how to best harness an LLM's internal heuristics to formulate the fine-grained hypothesis it itself would judge as the most promising among all the possible hypotheses it might generate, based on its own internal scoring, thus defining a latent reward landscape over the hypothesis space; (2) whether such LLM-judged better hypotheses exhibit stronger alignment with ground-truth hypotheses; (3) whether shaping the reward landscape using an ensemble of diverse LLMs of similar capacity yields better outcomes than defining it with repeated instances of the strongest LLM among them; and (4) whether an ensemble of identical LLMs provides a more reliable reward landscape than a single LLM. To address these questions, we propose a hierarchical search method that incrementally proposes and integrates details into the hypothesis, progressing from general concepts to specific experimental configurations. We show that this hierarchical process smooths the reward landscape and enables more effective optimization. Empirical evaluations on a new benchmark of expert-annotated fine-grained hypotheses from recent chemistry literature show that our method consistently outperforms strong baselines.
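The hierarchical search the abstract describes can be sketched as a greedy loop that refines a coarse research direction one granularity level at a time, keeping whichever candidate the scorer judges most promising. The sketch below is purely illustrative: `propose_details` and `score` are hypothetical stand-ins for LLM calls (candidate generation and the model's internal scoring that defines the latent reward landscape), not the authors' implementation.

```python
def propose_details(hypothesis, level):
    """Stand-in for an LLM proposing candidate refinements at one
    granularity level (concept -> method -> experimental config)."""
    return [f"{hypothesis} + {level}-detail-{i}" for i in range(3)]

def score(hypothesis):
    """Stand-in for the LLM's internal scoring over the hypothesis
    space; here a toy heuristic (string length) for illustration."""
    return len(hypothesis)

def hierarchical_search(coarse_direction, levels):
    """Integrate details level by level, greedily keeping the candidate
    the scorer rates highest at each stage."""
    current = coarse_direction
    for level in levels:
        candidates = propose_details(current, level)
        current = max(candidates, key=score)
    return current

best = hierarchical_search(
    "catalyst for CO2 reduction",           # hypothetical coarse direction
    ["concept", "method", "experiment"],    # general -> specific levels
)
print(best)
```

Because each level commits to a refinement before the next is explored, the search optimizes over a sequence of small candidate sets rather than the full combinatorial hypothesis space, which is the sense in which the paper argues the hierarchy smooths the reward landscape.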

