SciLitLLM: How to Adapt LLMs for Scientific Literature Understanding
August 28, 2024
Authors: Sihang Li, Jin Huang, Jiaxi Zhuang, Yaorui Shi, Xiaochen Cai, Mingjun Xu, Xiang Wang, Linfeng Zhang, Guolin Ke, Hengxing Cai
cs.AI
Abstract
Scientific literature understanding is crucial for extracting targeted
information and garnering insights, thereby significantly advancing scientific
discovery. Despite the remarkable success of Large Language Models (LLMs), they
face challenges in scientific literature understanding, primarily due to (1) a
lack of scientific knowledge and (2) unfamiliarity with specialized scientific
tasks.
To develop an LLM specialized in scientific literature understanding, we
propose a hybrid strategy that integrates continual pre-training (CPT) and
supervised fine-tuning (SFT), to simultaneously infuse scientific domain
knowledge and enhance instruction-following capabilities for domain-specific
tasks. In this process, we identify two key challenges: (1) constructing
high-quality CPT corpora, and (2) generating diverse SFT instructions. We
address these challenges through a meticulous pipeline, including PDF text
extraction, parsing content error correction, quality filtering, and synthetic
instruction creation. Applying this strategy, we present a suite of LLMs:
SciLitLLM, specialized in scientific literature understanding. These models
demonstrate promising performance on scientific literature understanding
benchmarks.
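
To make the corpus-construction pipeline above concrete, here is a minimal Python sketch of its steps. Only the pypdf extraction call is a real library API; the `llm` and `scorer` callables, the correction prompt, and the 0.5 quality threshold are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of the CPT corpus-construction pipeline: PDF text
# extraction -> parsing-error correction -> quality filtering. The
# `llm` and `scorer` arguments are hypothetical callables standing in
# for the paper's (unspecified) models.
from pypdf import PdfReader

def extract_pdf_text(path: str) -> str:
    """Step 1: extract raw text from a PDF (real pypdf API)."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def correct_parsing_errors(text: str, llm) -> str:
    """Step 2: ask an LLM to repair parsing artifacts (assumed prompt)."""
    prompt = ("Fix formatting and parsing errors in the following text "
              "without altering its scientific content:\n\n" + text)
    return llm(prompt)

def passes_quality_filter(text: str, scorer) -> bool:
    """Step 3: keep only high-quality passages; threshold is an assumption."""
    return scorer(text) >= 0.5

def build_cpt_corpus(pdf_paths, llm, scorer) -> list[str]:
    """Assemble the continual pre-training corpus from a set of PDFs."""
    corpus = []
    for path in pdf_paths:
        text = correct_parsing_errors(extract_pdf_text(path), llm)
        if passes_quality_filter(text, scorer):
            corpus.append(text)
    return corpus
```

Filtering after correction, rather than before, mirrors the ordering stated in the abstract: parsing errors are repaired first so that the quality filter judges content rather than extraction noise.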
Our contributions are threefold: (1) We present an effective framework that
integrates CPT and SFT to adapt LLMs to scientific literature understanding,
which can also be easily adapted to other domains. (2) We propose an LLM-based
synthesis method to generate diverse and high-quality scientific instructions,
resulting in a new instruction set -- SciLitIns -- for supervised fine-tuning
in less-represented scientific domains. (3) SciLitLLM achieves promising
performance improvements on scientific literature understanding benchmarks.
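
Contribution (2)'s LLM-based instruction synthesis can be sketched in the same spirit. The task list, prompt wording, and `llm` callable below are assumptions made for illustration; the abstract does not disclose the actual prompts or task taxonomy behind SciLitIns.

```python
# Illustrative sketch of LLM-based synthesis of diverse SFT instructions.
# Task types and prompt text are assumed; `llm` is a hypothetical callable
# expected to return a JSON string.
import json
import random

ASSUMED_TASK_TYPES = ["summarization", "entity extraction", "question answering"]

def synthesize_pair(passage: str, llm) -> dict:
    """Generate one instruction-response pair grounded in a passage."""
    task = random.choice(ASSUMED_TASK_TYPES)  # vary tasks for diversity
    prompt = (
        f"Given the scientific passage below, write one {task} instruction "
        "and a correct response. Answer as JSON with keys 'instruction' "
        f"and 'response'.\n\nPassage:\n{passage}"
    )
    return json.loads(llm(prompt))

def build_instruction_set(passages, llm, n_per_passage: int = 3) -> list[dict]:
    """Collect instruction-response pairs across passages for SFT."""
    return [synthesize_pair(p, llm)
            for p in passages
            for _ in range(n_per_passage)]
```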