

VISTA-PATH: An interactive foundation model for pathology image segmentation and quantitative analysis in computational pathology

January 23, 2026
作者: Peixian Liang, Songhao Li, Shunsuke Koga, Yutong Li, Zahra Alipour, Yucheng Tang, Daguang Xu, Zhi Huang
cs.AI

Abstract

Accurate semantic segmentation of histopathology images is crucial for quantitative tissue analysis and downstream clinical modeling. Recent segmentation foundation models have improved generalization through large-scale pretraining, yet remain poorly aligned with pathology because they treat segmentation as a static visual prediction task. Here we present VISTA-PATH, an interactive, class-aware pathology segmentation foundation model designed to resolve heterogeneous structures, incorporate expert feedback, and produce pixel-level segmentations that are directly meaningful for clinical interpretation. VISTA-PATH jointly conditions segmentation on visual context, semantic tissue descriptions, and optional expert-provided spatial prompts, enabling precise multi-class segmentation across heterogeneous pathology images. To support this paradigm, we curate VISTA-PATH Data, a large-scale pathology segmentation corpus comprising over 1.6 million image-mask-text triplets spanning 9 organs and 93 tissue classes. Across extensive held-out and external benchmarks, VISTA-PATH consistently outperforms existing segmentation foundation models. Importantly, VISTA-PATH supports dynamic human-in-the-loop refinement by propagating sparse, patch-level bounding-box annotation feedback into whole-slide segmentation. Finally, we show that the high-fidelity, class-aware segmentation produced by VISTA-PATH makes it a preferred model for computational pathology: it improves tissue microenvironment analysis through the proposed Tumor Interaction Score (TIS), which exhibits strong and significant associations with patient survival. Together, these results establish VISTA-PATH as a foundation model that elevates pathology image segmentation from a static prediction to an interactive and clinically grounded representation for digital pathology. Source code and demo can be found at https://github.com/zhihuanglab/VISTA-PATH.
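
The abstract does not define the Tumor Interaction Score, so the snippet below is only an illustrative sketch of the kind of interaction metric that can be derived from a class-aware segmentation mask such as VISTA-PATH's output: the fraction of the tumor boundary that directly abuts a second tissue class (e.g., lymphocytes). The function name, class labels, and boundary-based formula are assumptions for illustration, not the paper's actual TIS definition.

import numpy as np
from scipy.ndimage import binary_dilation

def interaction_score(mask: np.ndarray, tumor_label: int, partner_label: int) -> float:
    """Fraction of the tumor boundary that touches a partner tissue class.

    mask          : 2D array of per-pixel class labels (e.g., a patch-level
                    multi-class segmentation output).
    tumor_label   : integer label assumed to mark tumor pixels.
    partner_label : integer label assumed to mark the interacting class
                    (e.g., lymphocytes or stroma).
    """
    tumor = mask == tumor_label
    partner = mask == partner_label
    eight_conn = np.ones((3, 3), dtype=bool)

    # Tumor boundary: tumor pixels with at least one non-tumor 8-neighbor.
    boundary = tumor & binary_dilation(~tumor, structure=eight_conn)

    # Boundary pixels that touch the partner class.
    interacting = boundary & binary_dilation(partner, structure=eight_conn)

    n_boundary = int(boundary.sum())
    return float(interacting.sum()) / n_boundary if n_boundary else 0.0

# Toy example: a 4x4 tumor block (label 1) with lymphocytes (label 2) along one side.
toy = np.zeros((8, 8), dtype=int)
toy[2:6, 2:6] = 1
toy[2:6, 6] = 2
print(interaction_score(toy, tumor_label=1, partner_label=2))  # ~0.33

In the paper's setting such a patch-level score would presumably be aggregated across whole-slide tiles before being related to patient survival; that aggregation step is omitted here.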
PDF · January 27, 2026