Distilling Large Language Models for Biomedical Knowledge Extraction: A Case Study on Adverse Drug Events
July 12, 2023
Authors: Yu Gu, Sheng Zhang, Naoto Usuyama, Yonas Woldesenbet, Cliff Wong, Praneeth Sanapathi, Mu Wei, Naveen Valluri, Erika Strandberg, Tristan Naumann, Hoifung Poon
cs.AI
Abstract
Large language models (LLMs), such as GPT-4, have demonstrated remarkable
capabilities across a wide range of tasks, including health applications. In
this paper, we study how LLMs can be used to scale biomedical knowledge
curation. We find that while LLMs already possess decent competency in
structuring biomedical text, distilling them into a task-specific student model
through self-supervised learning attains substantial gains over
out-of-the-box LLMs, with additional advantages such as cost, efficiency, and
white-box model access.
We conduct a case study on adverse drug event (ADE) extraction, an
important area for improving care. On the standard ADE extraction evaluation, a
GPT-3.5 distilled PubMedBERT model attained accuracy comparable to that of
supervised state-of-the-art models without using any labeled data. Despite
being over 1,000 times smaller, the distilled model outperformed its teacher
GPT-3.5 by over 6 absolute points in F1 and GPT-4 by over 5 absolute points.
Ablation studies on the choice of distillation model (e.g., PubMedBERT vs. BioGPT)
and ADE extraction architecture shed light on best practices for biomedical
knowledge extraction. Similar gains were attained by distillation on other
standard biomedical knowledge extraction tasks, such as gene-disease
associations and protected health information, further illustrating the promise
of this approach.
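
For concreteness, the sketch below illustrates the teacher-student distillation recipe summarized in the abstract: the teacher LLM (GPT-3.5) pseudo-labels unlabeled biomedical text, and a compact domain-specific student (PubMedBERT) is then fine-tuned on those pseudo-labels. The prompt wording, checkpoint name, label scheme, and training setup are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of LLM-to-student distillation for ADE extraction.
# Assumptions: the prompt, label scheme, and hyperparameters below are
# illustrative placeholders, not the configuration reported in the paper.
from openai import OpenAI
from transformers import AutoTokenizer, AutoModelForTokenClassification

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def pseudo_label(sentence: str) -> str:
    """Use the teacher LLM (GPT-3.5 here) to extract (drug, adverse event)
    pairs from one unlabeled sentence; the outputs become silver training data."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{
            "role": "user",
            "content": "List every (drug, adverse event) pair in the sentence, "
                       "one pair per line. Reply 'none' if there are no pairs.\n"
                       f"Sentence: {sentence}",
        }],
    )
    return response.choices[0].message.content


# Student: a compact biomedical encoder fine-tuned on the silver labels
# (converted to token-level BIO tags, e.g., O, B-DRUG, I-DRUG, B-ADE, I-ADE).
student_checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(student_checkpoint)
student = AutoModelForTokenClassification.from_pretrained(
    student_checkpoint, num_labels=5
)
# From here, a standard transformers.Trainer run over the silver-labeled
# corpus produces the distilled task-specific model, with no gold labels used.
```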