MedVista3D: Vision-Language Modeling for Reducing Diagnostic Errors in 3D CT Disease Detection, Understanding and Reporting
September 4, 2025
Authors: Yuheng Li, Yenho Chen, Yuxiang Lai, Jike Zhong, Vanessa Wildman, Xiaofeng Yang
cs.AI
Abstract
Radiologic diagnostic errors (under-reading errors, inattentional blindness,
and communication failures) remain prevalent in clinical practice. These issues
often stem from missed localized abnormalities, limited global context, and
variability in report language. These challenges are amplified in 3D imaging,
where clinicians must examine hundreds of slices per scan. Addressing them
requires systems with precise localized detection, global volume-level
reasoning, and semantically consistent natural language reporting. However,
existing 3D vision-language models are unable to meet all three needs jointly,
lacking local-global understanding for spatial reasoning and struggling with
the variability and noise of uncurated radiology reports. We present
MedVista3D, a multi-scale semantic-enriched vision-language pretraining
framework for 3D CT analysis. To enable joint disease detection and holistic
interpretation, MedVista3D performs local and global image-text alignment for
fine-grained representation learning within full-volume context. To address
report variability, we apply language model rewrites and introduce a Radiology
Semantic Matching Bank for semantics-aware alignment. MedVista3D achieves
state-of-the-art performance on zero-shot disease classification, report
retrieval, and medical visual question answering, while transferring well to
organ segmentation and prognosis prediction. Code and datasets will be
released.
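The abstract describes multi-scale image-text alignment: a global (volume-level) contrastive objective combined with a local (region/finding-level) one. The sketch below illustrates the general idea with a standard symmetric InfoNCE loss over paired embeddings; the function names, the weighting scheme, and the use of random toy embeddings are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    Matching pairs sit on the diagonal of the similarity matrix; all other
    entries in the same row/column serve as negatives.
    """
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature              # (B, B) cosine similarities
    targets = torch.arange(img.size(0))               # diagonal = positive pairs
    loss_i2t = F.cross_entropy(logits, targets)       # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)   # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)

def multiscale_alignment_loss(global_img, global_txt,
                              local_img, local_txt, w_local=0.5):
    """Combine volume-level (global) and region-level (local) alignment terms.

    `w_local` is a hypothetical weighting hyperparameter; the paper's exact
    objective and loss balancing are not specified in the abstract.
    """
    return info_nce(global_img, global_txt) + w_local * info_nce(local_img, local_txt)

# Toy usage with random embeddings (batch of 4, dimension 128).
B, D = 4, 128
loss = multiscale_alignment_loss(torch.randn(B, D), torch.randn(B, D),
                                 torch.randn(B, D), torch.randn(B, D))
```

In this formulation, the global term encourages whole-scan embeddings to match whole-report embeddings, while the local term does the same for finding-level crops and sentences, which is one common way to realize the "local and global image-text alignment" the abstract mentions.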