Medical SAM3: A Foundation Model for Universal Prompt-Driven Medical Image Segmentation
January 15, 2026
Authors: Chongcong Jiang, Tianxingjian Ding, Chuhan Song, Jiachen Tu, Ziyang Yan, Yihua Shao, Zhenyi Wang, Yuzhang Shang, Tianyu Han, Yu Tian
cs.AI
Abstract
Promptable segmentation foundation models such as SAM3 have demonstrated strong generalization capabilities through interactive and concept-based prompting. However, their direct applicability to medical image segmentation remains limited by severe domain shifts, the absence of privileged spatial prompts, and the need to reason over complex anatomical and volumetric structures. Here we present Medical SAM3, a foundation model for universal prompt-driven medical image segmentation, obtained by fully fine-tuning SAM3 on large-scale, heterogeneous 2D and 3D medical imaging datasets with paired segmentation masks and text prompts. Through a systematic analysis of vanilla SAM3, we observe that its performance degrades substantially on medical data, with its apparent competitiveness largely relying on strong geometric priors such as ground-truth-derived bounding boxes. These findings motivate full model adaptation beyond prompt engineering alone. By fine-tuning SAM3's model parameters on 33 datasets spanning 10 medical imaging modalities, Medical SAM3 acquires robust domain-specific representations while preserving prompt-driven flexibility. Extensive experiments across organs, imaging modalities, and dimensionalities demonstrate consistent and significant performance gains, particularly in challenging scenarios characterized by semantic ambiguity, complex morphology, and long-range 3D context. Our results establish Medical SAM3 as a universal, text-guided segmentation foundation model for medical imaging and highlight the importance of holistic model adaptation for achieving robust prompt-driven segmentation under severe domain shift. Code and model will be made available at https://github.com/AIM-Research-Lab/Medical-SAM3.
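Since the repository at https://github.com/AIM-Research-Lab/Medical-SAM3 has not yet been released, the following is a minimal, hypothetical sketch of what text-prompted inference with such a model could look like. The class `MedicalSAM3`, its constructor, and the `forward` signature are illustrative assumptions, not the published API; only the PyTorch calls are real, and the placeholder layers stand in for SAM3's actual image and text encoders.

```python
# Hypothetical sketch of text-prompted medical image segmentation inference.
# All model names and signatures below are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MedicalSAM3(nn.Module):
    """Stand-in for the fine-tuned SAM3 checkpoint described in the abstract."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Placeholder towers; the real model reuses SAM3's image/text encoders.
        self.image_encoder = nn.Conv2d(1, embed_dim, kernel_size=16, stride=16)
        self.mask_head = nn.Conv2d(embed_dim, 1, kernel_size=1)

    def forward(self, image: torch.Tensor, text_prompt: str) -> torch.Tensor:
        # A real implementation would embed `text_prompt` and fuse it with the
        # image features; this stub ignores the prompt and emits a coarse map.
        feats = self.image_encoder(image)              # (B, C, H/16, W/16)
        logits = self.mask_head(feats)                 # (B, 1, H/16, W/16)
        logits = F.interpolate(logits, size=image.shape[-2:],
                               mode="bilinear", align_corners=False)
        return torch.sigmoid(logits)                   # per-pixel probabilities

model = MedicalSAM3().eval()
ct_slice = torch.randn(1, 1, 512, 512)   # one single-channel 2D slice
with torch.no_grad():
    prob = model(ct_slice, text_prompt="left kidney")
mask = prob > 0.5                        # binary mask from the probability map
print(mask.shape)                        # torch.Size([1, 1, 512, 512])
```

Per the abstract, the released model is expected to handle both 2D and 3D inputs across ten imaging modalities and to be driven by free-text prompts rather than privileged spatial prompts such as ground-truth-derived bounding boxes; the sketch above only mirrors the 2D, single-prompt case.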