CRISP-SAM2: SAM2 with Cross-Modal Interaction and Semantic Prompting for Multi-Organ Segmentation
June 29, 2025
Authors: Xinlei Yu, Chanmiao Wang, Hui Jin, Ahmed Elazab, Gangyong Jia, Xiang Wan, Changqing Zou, Ruiquan Ge
cs.AI
Abstract
Multi-organ medical segmentation is a crucial component of medical image
processing, essential for doctors to make accurate diagnoses and develop
effective treatment plans. Despite significant progress in this field, current
multi-organ segmentation models often suffer from inaccurate details,
dependence on geometric prompts and loss of spatial information. Addressing
these challenges, we introduce a novel model named CRISP-SAM2 with CRoss-modal
Interaction and Semantic Prompting based on SAM2. This model represents a
promising approach to multi-organ medical segmentation guided by textual
descriptions of organs. Our method begins by converting visual and textual
inputs into cross-modal contextualized semantics using a progressive
cross-attention interaction mechanism. These semantics are then injected into
the image encoder to enhance the detailed understanding of visual information.
To eliminate reliance on geometric prompts, we use a semantic prompting
strategy, replacing the original prompt encoder to sharpen the perception of
challenging targets. In addition, a similarity-sorting self-updating strategy
for memory and a mask-refining process are applied to further adapt to medical
imaging and enhance localized details. Comparative experiments conducted on
seven public datasets indicate that CRISP-SAM2 outperforms existing models.
Extensive analysis also demonstrates the effectiveness of our method, thereby
confirming its superior performance, especially in addressing the limitations
mentioned earlier. Our code is available at:
https://github.com/YU-deep/CRISP_SAM2.git.
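The cross-modal interaction described in the abstract can be illustrated with a minimal single-layer cross-attention sketch, in which visual patch features attend to textual token embeddings to produce text-conditioned semantics. This is a sketch only: the random weight matrices stand in for learned projections, the shapes are illustrative, and CRISP-SAM2's actual module is a progressive multi-layer mechanism whose details are not given in the abstract.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(visual, text, d_k=32, seed=0):
    """One cross-attention step: visual tokens (queries) attend to
    text tokens (keys/values).

    visual: (N_v, D) image-patch features.
    text:   (N_t, D) textual token embeddings.
    W_q, W_k, W_v are random stand-ins for learned projections
    (hypothetical; not taken from the paper).
    """
    rng = np.random.default_rng(seed)
    D = visual.shape[1]
    W_q = rng.standard_normal((D, d_k)) / np.sqrt(D)
    W_k = rng.standard_normal((D, d_k)) / np.sqrt(D)
    W_v = rng.standard_normal((D, d_k)) / np.sqrt(D)
    Q, K, V = visual @ W_q, text @ W_k, text @ W_v
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (N_v, N_t) attention weights
    return attn @ V                          # text-conditioned visual semantics
```

In the paper's design, the output of such a layer is what gets injected into the image encoder; stacking several layers with residual connections would give the "progressive" interaction the abstract refers to.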