UniRef++: Segment Every Reference Object in Spatial and Temporal Spaces
December 25, 2023
Authors: Jiannan Wu, Yi Jiang, Bin Yan, Huchuan Lu, Zehuan Yuan, Ping Luo
cs.AI
Abstract
The reference-based object segmentation tasks, namely referring image
segmentation (RIS), few-shot image segmentation (FSS), referring video object
segmentation (RVOS), and video object segmentation (VOS), aim to segment a
specific object by utilizing either language or annotated masks as references.
Despite significant progress in each respective field, current methods are
task-specifically designed and developed in different directions, which hinders
the activation of multi-task capabilities for these tasks. In this work, we end
this fragmentation and propose UniRef++ to unify the four
reference-based object segmentation tasks with a single architecture. At the
heart of our approach is the proposed UniFusion module, which performs
multiway fusion to handle different tasks with respect to their specified
references. A unified Transformer architecture is then adopted to achieve
instance-level segmentation. With this unified design, UniRef++ can
be jointly trained on a broad range of benchmarks and can flexibly complete
multiple tasks at run-time by specifying the corresponding references. We
evaluate our unified models on various benchmarks. Extensive experimental
results indicate that our proposed UniRef++ achieves state-of-the-art
performance on RIS and RVOS, and performs competitively on FSS and VOS with a
parameter-shared network. Moreover, we showcase that the proposed UniFusion
module can be easily incorporated into the advanced foundation model SAM and
obtains satisfactory results with parameter-efficient finetuning. Code and
models are available at https://github.com/FoundationVision/UniRef.
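To make the idea of a single reference-driven fusion module concrete, below is a minimal, hypothetical sketch (not the official UniRef++ code, which lives in the linked repository): a shared cross-attention layer fuses visual tokens with either language tokens (RIS/RVOS) or mask-pooled reference tokens (FSS/VOS), so the same network can be steered by whichever reference is supplied at runtime. The names `UniFusionSketch` and `mask_reference_tokens` are illustrative assumptions, not identifiers from the paper.

```python
# Hypothetical sketch of a UniFusion-style module (assumption, not the official code):
# one cross-attention block consumes either a language reference or a mask reference,
# illustrating how a single architecture can serve RIS/FSS/RVOS/VOS.
import torch
import torch.nn as nn


class UniFusionSketch(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Shared cross-attention: visual tokens attend to the reference tokens.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens: torch.Tensor, ref_tokens: torch.Tensor) -> torch.Tensor:
        # visual_tokens: (B, N, C) flattened image/frame features
        # ref_tokens:    (B, M, C) language tokens or mask-pooled visual tokens
        fused, _ = self.cross_attn(query=visual_tokens, key=ref_tokens, value=ref_tokens)
        return self.norm(visual_tokens + fused)


def mask_reference_tokens(ref_features: torch.Tensor, ref_mask: torch.Tensor) -> torch.Tensor:
    # Pool reference-frame features inside the annotated mask (a VOS/FSS-style reference).
    # ref_features: (B, C, H, W); ref_mask: (B, 1, H, W) with values in {0, 1}.
    weighted = ref_features * ref_mask
    pooled = weighted.flatten(2).sum(-1) / ref_mask.flatten(2).sum(-1).clamp(min=1e-6)
    return pooled.unsqueeze(1)  # (B, 1, C): a single mask token


if __name__ == "__main__":
    B, C, H, W = 2, 256, 32, 32
    fusion = UniFusionSketch(dim=C)
    visual = torch.randn(B, H * W, C)

    # Language reference (RIS/RVOS): stand-in for text-encoder outputs.
    lang_tokens = torch.randn(B, 20, C)
    out_lang = fusion(visual, lang_tokens)

    # Mask reference (FSS/VOS): pool reference-frame features under the given mask.
    ref_feats = torch.randn(B, C, H, W)
    ref_mask = (torch.rand(B, 1, H, W) > 0.5).float()
    out_mask = fusion(visual, mask_reference_tokens(ref_feats, ref_mask))

    print(out_lang.shape, out_mask.shape)  # both (B, H*W, C)
```

The sketch only illustrates the dispatch-by-reference idea described in the abstract; the actual multiway fusion, instance-level Transformer decoder, and SAM integration follow the released implementation.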