AdditiveLLM2: A Multi-modal Large Language Model for Additive Manufacturing
March 23, 2026
Authors: Peter Pak, Amir Barati Farimani
cs.AI
Abstract
This work presents AdditiveLLM2, a multi-modal, domain-adapted large language model built upon the instruction-tuned variant of the Gemma 3 model using a relatively small dataset of around 50 million tokens. The dataset (AdditiveLLM2-OA) consists of open-access additive manufacturing journal articles, with data extracted for the domain-adaptive pretraining and visual instruction tuning processes. The various stages of the developed model are evaluated with the Additive-Manufacturing-Benchmark, which consists of additive manufacturing domain-specific tasks compiled from published resources. AdditiveLLM2 exhibits proficiency in both language- and vision-based tasks, achieving accuracies upwards of 90% on general additive manufacturing knowledge. This domain-adaptive pretraining and instruction tuning strategy outlines an accessible method for specializing large language models to a domain such as additive manufacturing.
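The domain-adaptive pretraining described above amounts to continued causal-LM training on a packed domain corpus. A minimal sketch of the corpus-packing stage is below; the whitespace "tokenizer" and `<eos>` separator are placeholders standing in for the actual Gemma 3 tokenizer and special tokens, not the paper's implementation:

```python
# Sketch: pack a domain-text corpus into fixed-length training blocks,
# the standard preprocessing step for domain-adaptive pretraining.
# A whitespace split stands in for the real Gemma 3 tokenizer (assumption).

def pack_corpus(documents, block_size):
    """Concatenate tokenized documents and split into equal-size blocks."""
    tokens = []
    for doc in documents:
        tokens.extend(doc.split())  # placeholder tokenization
        tokens.append("<eos>")      # document separator token
    # Drop the trailing remainder so every block is exactly block_size long.
    n_blocks = len(tokens) // block_size
    return [tokens[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]

corpus = [
    "laser power affects melt pool depth",
    "recoater speed limits overall build rate",
]
blocks = pack_corpus(corpus, block_size=4)
print(len(blocks))  # → 3 full blocks from 14 total tokens
```

Each resulting block would then serve as one training example for next-token prediction over the open-access journal corpus.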