Bridging Language Barriers in Healthcare: A Study on Arabic LLMs
January 16, 2025
Authors: Nada Saadi, Tathagata Raha, Clément Christophe, Marco AF Pimentel, Ronnie Rajan, Praveen K Kanithi
cs.AI
Abstract
This paper investigates the challenges of developing large language models
(LLMs) proficient in both multilingual understanding and medical knowledge. We
demonstrate that simply translating medical data does not guarantee strong
performance on clinical tasks in the target language. Our experiments reveal
that the optimal language mix in training data varies significantly across
different medical tasks. We find that larger models with carefully calibrated
language ratios achieve superior performance on native-language clinical tasks.
Furthermore, our results suggest that relying solely on fine-tuning may not be
the most effective approach for incorporating new language knowledge into LLMs.
Instead, data and computationally intensive pretraining methods may still be
necessary to achieve optimal performance in multilingual medical settings.
These findings provide valuable guidance for building effective and inclusive
medical AI systems for diverse linguistic communities.