
CoD, Towards an Interpretable Medical Agent using Chain of Diagnosis

July 18, 2024
作者: Junying Chen, Chi Gui, Anningzhe Gao, Ke Ji, Xidong Wang, Xiang Wan, Benyou Wang
cs.AI

Abstract

The field of medical diagnosis has undergone a significant transformation with the advent of large language models (LLMs), yet the challenges of interpretability within these models remain largely unaddressed. This study introduces Chain-of-Diagnosis (CoD) to enhance the interpretability of LLM-based medical diagnostics. CoD transforms the diagnostic process into a diagnostic chain that mirrors a physician's thought process, providing a transparent reasoning pathway. Additionally, CoD outputs the disease confidence distribution to ensure transparency in decision-making. This interpretability makes model diagnostics controllable and aids in identifying critical symptoms for inquiry through the entropy reduction of confidences. With CoD, we developed DiagnosisGPT, capable of diagnosing 9604 diseases. Experimental results demonstrate that DiagnosisGPT outperforms other LLMs on diagnostic benchmarks. Moreover, DiagnosisGPT provides interpretability while ensuring controllability in diagnostic rigor.
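The abstract's key mechanism is selecting the next symptom to ask about via "entropy reduction of confidences." As an illustrative sketch only (not the paper's actual implementation; the disease names, probabilities, and helper functions below are hypothetical), the idea can be expressed as: compute the Shannon entropy of the current disease-confidence distribution, then prefer the inquiry whose expected posterior entropy is lowest.

```python
import math

def entropy(conf):
    """Shannon entropy (nats) of a disease-confidence distribution."""
    return -sum(p * math.log(p) for p in conf.values() if p > 0)

def expected_entropy_reduction(conf, outcomes):
    """Expected entropy drop from asking about one symptom.

    outcomes: list of (answer_probability, posterior_confidences) pairs,
    one per possible patient answer (e.g. confirms / denies the symptom).
    """
    before = entropy(conf)
    after = sum(p * entropy(post) for p, post in outcomes)
    return before - after

# Toy example with made-up numbers: current confidence over three diseases.
conf = {"flu": 0.5, "cold": 0.3, "covid": 0.2}

# Hypothetical posteriors if the patient confirms (p=0.6) or denies (p=0.4)
# having a fever; these average back to the prior, as they should.
fever_outcomes = [
    (0.6, {"flu": 0.7, "cold": 0.1, "covid": 0.2}),
    (0.4, {"flu": 0.2, "cold": 0.6, "covid": 0.2}),
]

gain = expected_entropy_reduction(conf, fever_outcomes)  # positive: asking helps
```

A controllable diagnostic threshold, as described in the abstract, would then correspond to continuing such inquiries until the top confidence exceeds a chosen cutoff.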
