ChatPaper.ai


From Macro to Micro: Benchmarking Microscopic Spatial Intelligence on Molecules via Vision-Language Models

December 11, 2025
作者: Zongzhao Li, Xiangzhe Kong, Jiahui Su, Zongyang Ma, Mingze Li, Songyou Li, Yuelin Zhang, Yu Rong, Tingyang Xu, Deli Zhao, Wenbing Huang
cs.AI

Abstract

This paper introduces the concept of Microscopic Spatial Intelligence (MiSI), the capability to perceive and reason about the spatial relationships of invisible microscopic entities, which is fundamental to scientific discovery. To assess the potential of Vision-Language Models (VLMs) in this domain, we propose a systematic benchmark framework, MiSI-Bench. This framework features over 163,000 question-answer pairs and 587,000 images derived from approximately 4,000 molecular structures, covering nine complementary tasks that evaluate abilities ranging from elementary spatial transformations to complex relational identifications. Experimental results reveal that current state-of-the-art VLMs perform significantly below human level on this benchmark. However, a fine-tuned 7B model demonstrates substantial potential, even surpassing humans in spatial transformation tasks, while its poor performance in scientifically grounded tasks like hydrogen bond recognition underscores the necessity of integrating explicit domain knowledge for progress toward scientific AGI. The datasets are available at https://huggingface.co/datasets/zongzhao/MiSI-bench.