OmniRad: A Radiological Foundation Model for Multi-Task Medical Image Analysis
February 4, 2026
Authors: Luca Zedda, Andrea Loddo, Cecilia Di Ruberto
cs.AI
Abstract
Radiological analysis increasingly benefits from pretrained visual representations that support heterogeneous downstream tasks across imaging modalities. In this work, we introduce OmniRad, a self-supervised radiological foundation model pretrained on 1.2 million medical images and designed around radiology-inspired principles that emphasize representation reuse and cross-task transferability. We evaluate the pretrained encoder under multiple downstream adaptation regimes, including lightweight task-specific adapters on a frozen backbone as well as full end-to-end fine-tuning for classification, allowing us to assess both representation quality and task-specific performance. OmniRad is benchmarked on a broad suite of public datasets spanning classification and segmentation across multiple modalities. On the MedMNISTv2 collection, it improves classification F1 by up to 2.05% over competing foundation models. For dense prediction, it achieves consistent mean Dice score gains across six MedSegBench datasets when using frozen representations. Qualitative analyses and latent-space visualizations suggest improved feature clustering and modality-related separation.
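To make the two adaptation regimes mentioned in the abstract concrete, the following is a minimal PyTorch sketch of a frozen-backbone adapter versus full end-to-end fine-tuning. It is illustrative only: the paper does not specify OmniRad's architecture or training hyperparameters here, so the `encoder` argument, `AdapterHead` design, feature dimension, and learning rates are all hypothetical placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn


class AdapterHead(nn.Module):
    """Hypothetical lightweight task-specific adapter on top of encoder features."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.LayerNorm(feat_dim),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats)


def build_frozen_regime(encoder: nn.Module, feat_dim: int, num_classes: int):
    """Regime 1: frozen backbone + trainable adapter (probes representation quality)."""
    for p in encoder.parameters():
        p.requires_grad = False
    encoder.eval()  # keep normalization/dropout statistics fixed
    head = AdapterHead(feat_dim, num_classes)
    # Only the adapter's parameters are optimized.
    optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
    return head, optimizer


def build_finetune_regime(encoder: nn.Module, feat_dim: int, num_classes: int):
    """Regime 2: full end-to-end fine-tuning (task-specific performance)."""
    head = AdapterHead(feat_dim, num_classes)
    params = list(encoder.parameters()) + list(head.parameters())
    # A smaller learning rate is a common choice when updating a pretrained backbone.
    optimizer = torch.optim.AdamW(params, lr=1e-4)
    return head, optimizer
```

In the frozen regime only the adapter is trained, so performance reflects the quality of the pretrained representations themselves; full fine-tuning instead measures the best task-specific accuracy the model can reach.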