SAMed-2: Selective Memory Enhanced Medical Segment Anything Model
July 4, 2025
Authors: Zhiling Yan, Sifan Song, Dingjie Song, Yiwei Li, Rong Zhou, Weixiang Sun, Zhennong Chen, Sekeun Kim, Hui Ren, Tianming Liu, Quanzheng Li, Xiang Li, Lifang He, Lichao Sun
cs.AI
Abstract
Recent "segment anything" efforts show promise by learning from large-scale
data, but adapting such models directly to medical images remains challenging
due to the complexity of medical data, noisy annotations, and continual
learning requirements across diverse modalities and anatomical structures. In
this work, we propose SAMed-2, a new foundation model for medical image
segmentation built upon the SAM-2 architecture. Specifically, we introduce a
temporal adapter into the image encoder to capture image correlations and a
confidence-driven memory mechanism to store high-certainty features for later
retrieval. This memory-based strategy counters the pervasive noise in
large-scale medical datasets and mitigates catastrophic forgetting when
encountering new tasks or modalities. To train and evaluate SAMed-2, we curate
MedBank-100k, a comprehensive dataset spanning seven imaging modalities and 21
medical segmentation tasks. Our experiments on both internal benchmarks and 10
external datasets demonstrate superior performance over state-of-the-art
baselines in multi-task scenarios. The code is available at:
https://github.com/ZhilingYan/Medical-SAM-Bench.
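To make the confidence-driven memory idea concrete, here is a minimal toy sketch (not the SAMed-2 implementation; class name, threshold, and eviction policy are all illustrative assumptions): features are written to memory only when their prediction confidence exceeds a threshold, and the most similar stored feature is retrieved at inference time.

```python
import numpy as np

class ConfidenceMemory:
    """Toy confidence-driven feature memory (hypothetical sketch, not the
    paper's implementation): store only high-certainty features, retrieve
    by cosine similarity to a query feature."""

    def __init__(self, threshold=0.9, capacity=1000):
        self.threshold = threshold      # minimum confidence to store
        self.capacity = capacity        # maximum number of stored entries
        self.features = []              # stored feature vectors
        self.confidences = []           # confidence score for each entry

    def write(self, feature, confidence):
        """Store a feature only if its confidence clears the threshold,
        which filters out features learned from noisy annotations."""
        if confidence < self.threshold:
            return False
        if len(self.features) >= self.capacity:
            # Evict the least-confident stored entry to make room.
            i = int(np.argmin(self.confidences))
            self.features.pop(i)
            self.confidences.pop(i)
        self.features.append(np.asarray(feature, dtype=float))
        self.confidences.append(float(confidence))
        return True

    def read(self, query):
        """Return the stored feature most similar (cosine) to the query,
        or None if the memory is empty."""
        if not self.features:
            return None
        q = np.asarray(query, dtype=float)
        q = q / (np.linalg.norm(q) + 1e-8)
        sims = [q @ (f / (np.linalg.norm(f) + 1e-8)) for f in self.features]
        return self.features[int(np.argmax(sims))]
```

Because low-confidence features are never written, a memory like this can also act as a buffer against catastrophic forgetting: high-certainty features from earlier tasks remain retrievable after training moves on to new modalities.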