Innovator-VL: A Multimodal Large Language Model for Scientific Discovery
January 27, 2026
Authors: Zichen Wen, Boxue Yang, Shuang Chen, Yaojie Zhang, Yuhang Han, Junlong Ke, Cong Wang, Yicheng Fu, Jiawang Zhao, Jiangchao Yao, Xi Fang, Zhen Wang, Henxing Cai, Lin Yao, Zhifeng Gao, Yanhui Hong, Nang Yuan, Yixuan Li, Guojiang Zhao, Haoyi Tao, Nan Wang, Han Lyu, Guolin Ke, Ning Liao, Xiaoxing Wang, Kai Chen, Zhiyu Li, Feiyu Xiong, Sihan Hu, Kun Chen, Yanfeng Wang, Weinan E, Linfeng Zhang
cs.AI
Abstract
We present Innovator-VL, a scientific multimodal large language model designed to advance understanding and reasoning across diverse scientific domains while maintaining excellent performance on general vision tasks. Contrary to the trend of relying on massive domain-specific pretraining and opaque pipelines, our work demonstrates that principled training design and transparent methodology can yield strong scientific intelligence with substantially reduced data requirements. (i) First, we provide a fully transparent, end-to-end reproducible training pipeline, covering data collection, cleaning, preprocessing, supervised fine-tuning, reinforcement learning, and evaluation, along with detailed optimization recipes. This facilitates systematic extension by the community. (ii) Second, Innovator-VL exhibits remarkable data efficiency, achieving competitive performance on various scientific tasks using fewer than five million curated samples without large-scale pretraining. These results highlight that effective reasoning can be achieved through principled data selection rather than indiscriminate scaling. (iii) Third, Innovator-VL demonstrates strong generalization, achieving competitive performance on general vision, multimodal reasoning, and scientific benchmarks. This indicates that scientific alignment can be integrated into a unified model without compromising general-purpose capabilities. Our practices suggest that efficient, reproducible, and high-performing scientific multimodal models can be built even without large-scale data, providing a practical foundation for future research.