MedHallu: A Comprehensive Benchmark for Detecting Medical Hallucinations in Large Language Models
February 20, 2025
Authors: Shrey Pandit, Jiawei Xu, Junyuan Hong, Zhangyang Wang, Tianlong Chen, Kaidi Xu, Ying Ding
cs.AI
Abstract
Advancements in Large Language Models (LLMs) and their increasing use in
medical question-answering necessitate rigorous evaluation of their
reliability. A critical challenge lies in hallucination, where models generate
plausible yet factually incorrect outputs. In the medical domain, this poses
serious risks to patient safety and clinical decision-making. To address this,
we introduce MedHallu, the first benchmark specifically designed for medical
hallucination detection. MedHallu comprises 10,000 high-quality question-answer
pairs derived from PubMedQA, with hallucinated answers systematically generated
through a controlled pipeline. Our experiments show that state-of-the-art LLMs,
including GPT-4o, Llama-3.1, and the medically fine-tuned UltraMedical,
struggle with this binary hallucination detection task, with the best model
achieving an F1 score as low as 0.625 for detecting "hard" category
hallucinations. Using bidirectional entailment clustering, we show that
harder-to-detect hallucinations are semantically closer to ground truth.
Through experiments, we also show that incorporating domain-specific knowledge and
introducing a "not sure" category as one of the answer categories improves the
precision and F1 scores by up to 38% relative to baselines.
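To make the bidirectional entailment clustering mentioned in the abstract concrete, here is a minimal Python sketch under stated assumptions: the `entails` helper is a hypothetical stand-in (a crude token-overlap heuristic with an arbitrary 0.8 threshold) for the NLI model a real pipeline would call, and the example answers are invented for illustration. Two answers are grouped only when entailment holds in both directions.

```python
def entails(premise: str, hypothesis: str) -> bool:
    """Stand-in for an NLI check ("does premise entail hypothesis?").
    A real pipeline would query an entailment classifier; this token-overlap
    heuristic only keeps the sketch self-contained and runnable."""
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    return len(h & p) / max(len(h), 1) > 0.8


def bidirectional_entailment_clusters(answers: list[str]) -> list[list[str]]:
    """Greedily group answers that mutually entail each other: an answer joins
    an existing cluster only if it and the cluster's representative entail
    each other in both directions; otherwise it starts a new cluster."""
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            rep = cluster[0]  # representative answer for this cluster
            if entails(rep, ans) and entails(ans, rep):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters


# Toy usage: paraphrases end up in one cluster, the contradictory answer in another.
answers = [
    "Aspirin reduces the risk of myocardial infarction.",
    "Aspirin reduces the risk of a myocardial infarction.",
    "Aspirin has no effect on myocardial infarction risk.",
]
print(bidirectional_entailment_clusters(answers))
```

Under this view, hallucinated answers whose cluster lies close to the ground-truth answer are exactly the "hard" cases the abstract reports detectors struggling with.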
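In the same spirit, here is a small sketch of how binary hallucination detection might be scored when the detector is also allowed to answer "not sure". The scoring convention below (abstentions are never counted as positive predictions, but a hallucinated example answered "not sure" still counts as a missed detection) is an assumption made for illustration, not necessarily the paper's exact protocol.

```python
def detection_scores(gold: list[str], pred: list[str]) -> dict[str, float]:
    """Precision/recall/F1 for binary hallucination detection with an optional
    "not sure" abstention. Labels assumed: gold in {"hallucinated", "faithful"},
    pred additionally allows "not sure"."""
    tp = sum(g == "hallucinated" and p == "hallucinated" for g, p in zip(gold, pred))
    fp = sum(g == "faithful" and p == "hallucinated" for g, p in zip(gold, pred))
    fn = sum(g == "hallucinated" and p != "hallucinated" for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}


# Toy usage: abstaining on an uncertain case avoids a false positive,
# which is how a "not sure" option can lift precision-oriented scores.
print(detection_scores(
    gold=["hallucinated", "faithful", "hallucinated"],
    pred=["hallucinated", "not sure", "not sure"],
))
```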