
COCONut: Modernizing COCO Segmentation

April 12, 2024
Authors: Xueqing Deng, Qihang Yu, Peng Wang, Xiaohui Shen, Liang-Chieh Chen
cs.AI

Abstract

In recent decades, the vision community has witnessed remarkable progress in visual recognition, partially owing to advancements in dataset benchmarks. Notably, the established COCO benchmark has propelled the development of modern detection and segmentation systems. However, the COCO segmentation benchmark has seen comparatively slow improvement over the last decade. Originally equipped with coarse polygon annotations for thing instances, it gradually incorporated coarse superpixel annotations for stuff regions, which were subsequently heuristically amalgamated to yield panoptic segmentation annotations. These annotations, executed by different groups of raters, have resulted not only in coarse segmentation masks but also in inconsistencies between segmentation types. In this study, we undertake a comprehensive reevaluation of the COCO segmentation annotations. By enhancing the annotation quality and expanding the dataset to encompass 383K images with more than 5.18M panoptic masks, we introduce COCONut, the COCO Next Universal segmenTation dataset. COCONut harmonizes segmentation annotations across semantic, instance, and panoptic segmentation with meticulously crafted high-quality masks, and establishes a robust benchmark for all segmentation tasks. To our knowledge, COCONut stands as the inaugural large-scale universal segmentation dataset, verified by human raters. We anticipate that the release of COCONut will significantly contribute to the community's ability to assess the progress of novel neural networks.
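The abstract notes that COCONut harmonizes annotations across semantic, instance, and panoptic segmentation. As a minimal sketch of what such harmonization enables, the example below derives a semantic map and per-instance masks from a single panoptic annotation, assuming the standard COCO panoptic format (segment ids encoded as R + 256*G + 256^2*B in a PNG, plus per-segment category_id and iscrowd fields in JSON); the file names and exact COCONut layout are assumptions, not confirmed by the paper.

# Hypothetical sketch: splitting one panoptic annotation into semantic and
# instance views, assuming COCO-style panoptic PNG/JSON encoding.
import json
import numpy as np
from PIL import Image

def rgb2id(rgb: np.ndarray) -> np.ndarray:
    # Decode the per-pixel segment id from a COCO-style panoptic PNG.
    rgb = rgb.astype(np.uint32)
    return rgb[..., 0] + 256 * rgb[..., 1] + 256 * 256 * rgb[..., 2]

def panoptic_to_semantic_and_instance(png_path: str, segments_info: list):
    # Map every pixel to its segment id, then regroup by category / instance.
    seg_id_map = rgb2id(np.array(Image.open(png_path).convert("RGB")))

    semantic = np.zeros_like(seg_id_map, dtype=np.int32)  # category id per pixel
    instances = []                                        # (category_id, binary mask) pairs
    for seg in segments_info:
        mask = seg_id_map == seg["id"]
        semantic[mask] = seg["category_id"]
        if not seg.get("iscrowd", 0):                     # keep only countable thing instances
            instances.append((seg["category_id"], mask))
    return semantic, instances

# Usage with hypothetical file names; the actual COCONut release may differ.
ann = json.load(open("panoptic_annotations.json"))["annotations"][0]
semantic_map, instance_masks = panoptic_to_semantic_and_instance(
    ann["file_name"], ann["segments_info"])

Because all three task-specific ground truths are projections of the same panoptic masks, evaluations on semantic, instance, and panoptic benchmarks remain mutually consistent, which is the point of a universal segmentation dataset.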