CRISP-SAM2: SAM2 with Cross-Modal Interaction and Semantic Prompting for Multi-Organ Segmentation
June 29, 2025
Authors: Xinlei Yu, Changmiao Wang, Hui Jin, Ahmed Elazab, Gangyong Jia, Xiang Wan, Changqing Zou, Ruiquan Ge
cs.AI
Abstract
Multi-organ medical segmentation is a crucial component of medical image
processing, essential for doctors to make accurate diagnoses and develop
effective treatment plans. Despite significant progress in this field, current
multi-organ segmentation models often suffer from inaccurate details,
dependence on geometric prompts, and loss of spatial information. Addressing
these challenges, we introduce a novel model named CRISP-SAM2 with CRoss-modal
Interaction and Semantic Prompting based on SAM2. This model represents a
promising approach to multi-organ medical segmentation guided by textual
descriptions of organs. Our method begins by converting visual and textual
inputs into cross-modal contextualized semantics using a progressive
cross-attention interaction mechanism. These semantics are then injected into
the image encoder to enhance the detailed understanding of visual information.
To eliminate reliance on geometric prompts, we use a semantic prompting
strategy, replacing the original prompt encoder to sharpen the perception of
challenging targets. In addition, a similarity-sorting self-updating strategy
for memory and a mask-refining process is applied to further adapt to medical
imaging and enhance localized details. Comparative experiments conducted on
seven public datasets indicate that CRISP-SAM2 outperforms existing models.
Extensive analysis also demonstrates the effectiveness of our method, thereby
confirming its superior performance, especially in addressing the limitations
mentioned earlier. Our code is available at:
https://github.com/YU-deep/CRISP_SAM2.git.
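
To make the first stage of the pipeline concrete, below is a minimal PyTorch sketch of a progressive cross-attention interaction: visual and textual tokens attend to each other over a few stages, producing cross-modal contextualized semantics that could be injected into the image encoder. The module name, dimensions, and number of stages are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ProgressiveCrossAttention(nn.Module):
    """Alternating text-to-vision and vision-to-text attention over several
    stages, yielding cross-modal semantics to inject into the image encoder."""

    def __init__(self, dim: int = 256, num_heads: int = 8, num_stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.ModuleDict({
                "txt2vis": nn.MultiheadAttention(dim, num_heads, batch_first=True),
                "vis2txt": nn.MultiheadAttention(dim, num_heads, batch_first=True),
                "norm_v": nn.LayerNorm(dim),
                "norm_t": nn.LayerNorm(dim),
            })
            for _ in range(num_stages)
        ])

    def forward(self, vis: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # vis: (B, N_patches, dim) image tokens; txt: (B, N_words, dim) text tokens
        for stage in self.stages:
            # Visual tokens query the organ's textual description.
            v_ctx, _ = stage["txt2vis"](query=vis, key=txt, value=txt)
            vis = stage["norm_v"](vis + v_ctx)
            # Text tokens then query the updated visual features in turn.
            t_ctx, _ = stage["vis2txt"](query=txt, key=vis, value=vis)
            txt = stage["norm_t"](txt + t_ctx)
        return vis  # contextualized semantics, to be added to encoder features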
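
The semantic-prompting idea, replacing SAM2's geometric (point/box) prompt encoder, can be sketched in the same spirit: pooled cross-modal semantics are projected into a small set of prompt tokens for the mask decoder. `SemanticPrompter` and all shapes here are hypothetical.

import torch
import torch.nn as nn

class SemanticPrompter(nn.Module):
    """Maps pooled cross-modal semantics to a few prompt tokens, standing in
    for the geometric prompt encoder."""

    def __init__(self, dim: int = 256, num_tokens: int = 4):
        super().__init__()
        self.num_tokens = num_tokens
        self.proj = nn.Sequential(
            nn.Linear(dim, dim),
            nn.GELU(),
            nn.Linear(dim, dim * num_tokens),
        )

    def forward(self, semantics: torch.Tensor) -> torch.Tensor:
        # semantics: (B, N, dim) -> pool over tokens, then expand to K prompts
        pooled = semantics.mean(dim=1)     # (B, dim)
        tokens = self.proj(pooled)         # (B, dim * K)
        return tokens.view(pooled.shape[0], self.num_tokens, -1)  # (B, K, dim)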
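
Finally, a hedged sketch of a similarity-sorting self-updating memory: stored entries are ranked by cosine similarity to the current slice embedding and the least similar are evicted. The capacity and eviction rule are assumptions for illustration, not the paper's exact strategy.

import torch
import torch.nn.functional as F

class SimilaritySortedMemory:
    """Keeps the `capacity` stored features most similar to the newest one,
    sorted from most to least similar."""

    def __init__(self, capacity: int = 8):
        self.capacity = capacity
        self.entries: list[torch.Tensor] = []  # each entry: (dim,) pooled feature

    def update(self, feat: torch.Tensor) -> None:
        self.entries.append(feat)
        # Rank all entries by cosine similarity to the incoming feature.
        sims = [F.cosine_similarity(feat, e, dim=0).item() for e in self.entries]
        order = sorted(range(len(self.entries)), key=sims.__getitem__, reverse=True)
        # Evict the least similar entries beyond capacity.
        self.entries = [self.entries[i] for i in order[: self.capacity]]

Ranking by similarity plausibly keeps the memory focused on anatomically consistent context across slices, in line with the abstract's stated goal of adapting the memory mechanism to medical imaging.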