

Dr. LLaMA: Improving Small Language Models in Domain-Specific QA via Generative Data Augmentation

May 12, 2023
作者: Zhen Guo, Peiqi Wang, Yanwei Wang, Shangdi Yu
cs.AI

Abstract

Large Language Models (LLMs) have made significant strides in natural language processing but face challenges in terms of computational expense and inefficiency as they grow in size, especially in domain-specific tasks. Small Language Models (SLMs), on the other hand, often struggle in these tasks due to limited capacity and training data. In this paper, we introduce Dr. LLaMA, a method for improving SLMs through generative data augmentation using LLMs, focusing on medical question-answering tasks and the PubMedQA dataset. Our findings indicate that LLMs effectively refine and diversify existing question-answer pairs, resulting in improved performance of a much smaller model on domain-specific QA datasets after fine-tuning. This study highlights the challenges of using LLMs for domain-specific question answering and suggests potential research directions to address these limitations, ultimately aiming to create more efficient and capable models for specialized applications. We have also made our code available for interested researchers.
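The augmentation step the abstract describes (an LLM refining and diversifying existing question-answer pairs before fine-tuning a smaller model) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `call_llm` is a hypothetical stand-in for any LLM completion API, and the prompt templates are assumptions rather than the authors' exact prompts.

```python
# Hedged sketch of LLM-based generative data augmentation for QA pairs.
# `call_llm` is a hypothetical hook for an LLM completion API.

def build_augmentation_prompts(question, answer):
    """Build rewrite prompts asking an LLM to refine and diversify
    one existing question-answer pair (illustrative templates)."""
    return [
        f"Paraphrase this medical question, keeping its meaning:\n{question}",
        f"Rewrite this answer more concisely, preserving all facts:\n{answer}",
    ]

def augment_qa_pairs(pairs, call_llm):
    """Expand a QA dataset: keep the originals and append one
    LLM-rewritten variant per original pair."""
    augmented = list(pairs)
    for question, answer in pairs:
        q_prompt, a_prompt = build_augmentation_prompts(question, answer)
        augmented.append((call_llm(q_prompt), call_llm(a_prompt)))
    return augmented

# Offline usage example with a dummy LLM that echoes the input text,
# so the sketch runs without any API access:
dummy_llm = lambda prompt: prompt.splitlines()[-1]
pairs = [("Does aspirin reduce fever?", "Yes, it is an antipyretic.")]
result = augment_qa_pairs(pairs, dummy_llm)
print(len(result))  # 2: the original pair plus one generated variant
```

The enlarged dataset (`result` here) would then be used to fine-tune the small model, per the approach the abstract outlines.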