
PhysGym: Benchmarking LLMs in Interactive Physics Discovery with Controlled Priors

July 21, 2025
Authors: Yimeng Chen, Piotr Piȩkos, Mateusz Ostaszewski, Firas Laakom, Jürgen Schmidhuber
cs.AI

Abstract

Evaluating the scientific discovery capabilities of agents based on large language models (LLMs), particularly how they cope with varying environmental complexity and exploit prior knowledge, requires specialized benchmarks that are currently lacking. To address this gap, we introduce PhysGym, a novel benchmark suite and simulation platform for rigorously assessing LLM-based scientific reasoning in interactive physics environments. PhysGym's primary contribution lies in its fine-grained control over the level of prior knowledge provided to the agent. This allows researchers to dissect agent performance along axes including problem complexity and prior-knowledge level. The benchmark comprises a suite of interactive simulations in which agents must actively probe environments, gather data sequentially under constraints, and formulate hypotheses about the underlying physical laws. PhysGym provides standardized evaluation protocols and metrics for assessing hypothesis accuracy and model fidelity. We demonstrate the benchmark's utility by presenting results from baseline LLMs, showcasing its ability to differentiate capabilities based on varying priors and task complexity.
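
To make the interaction protocol concrete, the sketch below shows what one PhysGym-style episode could look like under a gym-style interface. Everything here is an illustrative assumption: the projectile task stands in for one of the benchmark's simulations, and the names ProjectileEnv, probe, choose_experiment, and formulate_hypothesis are hypothetical, not the benchmark's actual API.

```python
import math
import random

# Hypothetical sketch of a PhysGym-style episode: an agent probes a hidden
# physical law under a fixed experiment budget, then submits a symbolic
# hypothesis that is scored against held-out probes. All names below are
# illustrative assumptions, not the benchmark's actual API.

class ProjectileEnv:
    """Hidden ground-truth law: range = v^2 * sin(2*theta) / g."""

    def __init__(self, budget: int = 20, g: float = 9.81):
        self.budget = budget  # max number of experiments the agent may run
        self.g = g

    def probe(self, v: float, theta: float) -> float:
        """Run one experiment: launch speed v (m/s), angle theta (rad)."""
        if self.budget <= 0:
            raise RuntimeError("experiment budget exhausted")
        self.budget -= 1
        return v ** 2 * math.sin(2 * theta) / self.g


def run_episode(env: ProjectileEnv, agent) -> str:
    """Interaction loop: sequential probing, then one final hypothesis."""
    observations = []  # list of ((v, theta), measured_range) pairs
    while env.budget > 0:
        v, theta = agent.choose_experiment(observations)  # e.g. one LLM call
        observations.append(((v, theta), env.probe(v, theta)))
    # Hypothesis returned as an expression, e.g. "v**2 * sin(2*theta) / 9.81"
    return agent.formulate_hypothesis(observations)


def fidelity(hypothesis: str, n_test: int = 100) -> float:
    """Mean squared error of the hypothesis on held-out random probes."""
    truth = ProjectileEnv(budget=n_test)
    err = 0.0
    for _ in range(n_test):
        v = random.uniform(1.0, 50.0)
        theta = random.uniform(0.0, math.pi / 2)
        pred = eval(hypothesis, {"v": v, "theta": theta, "sin": math.sin})
        err += (pred - truth.probe(v, theta)) ** 2
    return err / n_test
```

In this framing, the controlled-prior axis would govern how much of the task description (variable names, physical context) appears in the agent's prompt, the budget enforces the sequential data-collection constraint described in the abstract, and the held-out error would correspond to a model-fidelity metric.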