Does DINOv3 Set a New Medical Vision Standard?
September 8, 2025
Authors: Che Liu, Yinda Chen, Haoyuan Shi, Jinpeng Lu, Bailiang Jian, Jiazhen Pan, Linghan Cai, Jiayi Wang, Yundi Zhang, Jun Li, Cosmin I. Bercea, Cheng Ouyang, Chen Chen, Zhiwei Xiong, Benedikt Wiestler, Christian Wachinger, Daniel Rueckert, Wenjia Bai, Rossella Arcucci
cs.AI
Abstract
The advent of large-scale vision foundation models, pre-trained on diverse
natural images, has marked a paradigm shift in computer vision. However, whether
the efficacy of these frontier vision foundation models transfers to specialized
domains such as medical imaging remains an open question. This report
investigates whether DINOv3, a state-of-the-art self-supervised vision
transformer (ViT) with strong capabilities on dense prediction tasks,
can directly serve as a powerful, unified encoder for medical vision tasks
without domain-specific pre-training. To answer this, we benchmark DINOv3
across common medical vision tasks, including 2D/3D classification and
segmentation on a wide range of medical imaging modalities. We systematically
analyze its scalability by varying model sizes and input image resolutions. Our
findings reveal that DINOv3 shows impressive performance and establishes a
formidable new baseline. Remarkably, it can even outperform medical-specific
foundation models like BiomedCLIP and CT-Net on several tasks, despite being
trained solely on natural images. However, we identify clear limitations: the
model's features degrade in scenarios requiring deep domain specialization,
such as in Whole-Slide Pathological Images (WSIs), Electron Microscopy (EM),
and Positron Emission Tomography (PET). Furthermore, we observe that DINOv3
does not consistently obey scaling laws in the medical domain; performance does
not reliably increase with larger models or finer feature resolutions, showing
diverse scaling behaviors across tasks. Ultimately, our work establishes DINOv3
as a strong baseline, whose powerful visual features can serve as a robust
prior for multiple complex medical tasks. This opens promising future
directions, such as leveraging its features to enforce multiview consistency in
3D reconstruction.
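A common way to evaluate a frozen foundation-model encoder on a downstream classification task is a linear probe: features are extracted once and only a linear head is trained. The sketch below illustrates that protocol in miniature. It is a hypothetical illustration, not the paper's exact setup: a fixed random projection stands in for the DINOv3 backbone (whose real checkpoints must be loaded from the official release), and the toy Gaussian data stands in for a medical imaging dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen DINOv3 backbone: a fixed random projection.
# (Hypothetical placeholder -- in practice you would load the official
# DINOv3 checkpoint and use its pooled patch / [CLS] features.)
D_IN, D_FEAT, N_CLASSES = 64, 32, 2
W_frozen = rng.normal(size=(D_IN, D_FEAT))

def encode(x):
    """Frozen feature extraction: W_frozen is never updated."""
    return np.tanh(x @ W_frozen)

# Toy stand-in for a two-class medical dataset: two Gaussian blobs.
n = 200
X = np.concatenate([rng.normal(-1.0, 1.0, size=(n, D_IN)),
                    rng.normal(+1.0, 1.0, size=(n, D_IN))])
y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])

feats = encode(X)

# Linear probe: softmax regression trained by full-batch gradient descent.
W = np.zeros((D_FEAT, N_CLASSES))
b = np.zeros(N_CLASSES)
for _ in range(300):
    logits = feats @ W + b
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0          # d(cross-entropy)/d(logits)
    W -= 0.1 * feats.T @ p / len(y)
    b -= 0.1 * p.mean(axis=0)

acc = (np.argmax(feats @ W + b, axis=1) == y).mean()
print(f"linear-probe accuracy: {acc:.2f}")
```

Because the encoder stays frozen, any accuracy above chance is attributable to the quality of its features, which is exactly the property the benchmark isolates when comparing DINOv3 against medical-specific encoders.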