Conversational Image Segmentation: Grounding Abstract Concepts with Scalable Supervision
February 13, 2026
Authors: Aadarsh Sahoo, Georgia Gkioxari
cs.AI
Abstract
Conversational image segmentation grounds abstract, intent-driven concepts into pixel-accurate masks. Prior work on referring image grounding focuses on categorical and spatial queries (e.g., "left-most apple") and overlooks functional and physical reasoning (e.g., "where can I safely store the knife?"). We address this gap and introduce Conversational Image Segmentation (CIS) and ConverSeg, a benchmark spanning entities, spatial relations, intent, affordances, functions, safety, and physical reasoning. We also present ConverSeg-Net, which fuses strong segmentation priors with language understanding, and an AI-powered data engine that generates prompt-mask pairs without human supervision. We show that current language-guided segmentation models are inadequate for CIS, while ConverSeg-Net trained on our data engine achieves significant gains on ConverSeg and maintains strong performance on existing language-guided segmentation benchmarks. Project webpage: https://glab-caltech.github.io/converseg/