OmniRad: A Radiological Foundation Model for Multi-Task Medical Image Analysis
February 4, 2026
Authors: Luca Zedda, Andrea Loddo, Cecilia Di Ruberto
cs.AI
Abstract
Radiological analysis increasingly benefits from pretrained visual representations that can support heterogeneous downstream tasks across imaging modalities. In this work, we introduce OmniRad, a self-supervised radiological foundation model pretrained on 1.2 million medical images, designed with radiology-inspired principles emphasizing representation reuse and cross-task transferability. We evaluate the pretrained encoder under multiple downstream adaptation regimes, including lightweight task-specific adapters with a frozen backbone as well as full end-to-end fine-tuning for classification, allowing us to assess both representation quality and task-specific performance. OmniRad is evaluated on a broad suite of public benchmarks spanning classification and segmentation across multiple modalities. On the MedMNISTv2 collection, OmniRad improves classification F1 by up to 2.05% over competing foundation models. For dense prediction, OmniRad attains mean Dice score improvements across six MedSegBench datasets when using frozen representations. Qualitative analyses and latent-space visualizations suggest improved feature clustering and modality-related separation.
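To make the two adaptation regimes concrete, the following is a minimal PyTorch sketch. Everything in it is illustrative rather than taken from the paper: `OmniRadEncoder` is a hypothetical stand-in for the pretrained encoder (the abstract does not specify the architecture or a loading API), and the helper names `build_frozen_probe` and `build_full_finetune` are invented for this example.

```python
import torch
import torch.nn as nn


class OmniRadEncoder(nn.Module):
    """Hypothetical stand-in for the pretrained OmniRad encoder.

    The real model's architecture and checkpoint-loading mechanism are
    not described in the abstract; this toy backbone only serves to
    illustrate the two adaptation regimes.
    """

    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.embed_dim = embed_dim
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)


def build_frozen_probe(encoder: OmniRadEncoder, num_classes: int) -> nn.Module:
    """Regime 1: freeze the pretrained backbone and train only a
    lightweight task-specific head on top of its representations."""
    for p in encoder.parameters():
        p.requires_grad = False
    head = nn.Linear(encoder.embed_dim, num_classes)
    return nn.Sequential(encoder, head)


def build_full_finetune(encoder: OmniRadEncoder, num_classes: int) -> nn.Module:
    """Regime 2: end-to-end fine-tuning; encoder and head both stay trainable."""
    head = nn.Linear(encoder.embed_dim, num_classes)
    return nn.Sequential(encoder, head)


if __name__ == "__main__":
    encoder = OmniRadEncoder()
    model = build_frozen_probe(encoder, num_classes=9)  # hypothetical class count
    x = torch.randn(4, 1, 224, 224)  # batch of single-channel images
    print(model(x).shape)  # torch.Size([4, 9])
```

The frozen-probe regime measures representation quality directly, since only the small head adapts to the task, while full fine-tuning measures the ceiling of task-specific performance.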
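The latent-space visualizations mentioned in the abstract could be reproduced along these lines. This is a generic sketch using scikit-learn's t-SNE on frozen-encoder features, not the authors' actual analysis pipeline; the data loader yielding `(images, modality_ids)` pairs is assumed.

```python
import matplotlib.pyplot as plt
import numpy as np
import torch
from sklearn.manifold import TSNE


@torch.no_grad()
def embed_dataset(encoder, loader, device="cpu"):
    """Collect frozen-encoder features and modality labels for visualization."""
    encoder.eval().to(device)
    feats, labels = [], []
    for images, modality_ids in loader:  # assumed loader format
        feats.append(encoder(images.to(device)).cpu().numpy())
        labels.append(modality_ids.numpy())
    return np.concatenate(feats), np.concatenate(labels)


def plot_latent_space(features, labels, path="latent_tsne.png"):
    """Project features to 2-D with t-SNE and color points by modality,
    to inspect feature clustering and modality-related separation."""
    coords = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(features)
    plt.figure(figsize=(6, 5))
    scatter = plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=5, cmap="tab10")
    plt.colorbar(scatter, label="modality id")
    plt.title("Frozen-encoder features, t-SNE projection")
    plt.savefig(path, dpi=150)
```

Well-separated, modality-coherent clusters in such a projection are the kind of qualitative evidence the abstract refers to.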