

PMT: Plain Mask Transformer for Image and Video Segmentation with Frozen Vision Encoders

March 26, 2026
Authors: Niccolò Cavagnero, Narges Norouzi, Gijs Dubbelman, Daan de Geus
cs.AI

Abstract

Vision Foundation Models (VFMs) pre-trained at scale enable a single frozen encoder to serve multiple downstream tasks simultaneously. Recent VFM-based encoder-only models for image and video segmentation, such as EoMT and VidEoMT, achieve competitive accuracy with remarkably low latency, yet they require finetuning the encoder, sacrificing the multi-task encoder sharing that makes VFMs practically attractive for large-scale deployment. To reconcile encoder-only simplicity and speed with frozen VFM features, we propose the Plain Mask Decoder (PMD), a fast Transformer-based segmentation decoder that operates on top of frozen VFM features. The resulting model, the Plain Mask Transformer (PMT), preserves the architectural simplicity and low latency of encoder-only designs while keeping the encoder representation unchanged and shareable. The design seamlessly applies to both image and video segmentation, inheriting the generality of the encoder-only framework. On standard image segmentation benchmarks, PMT matches the frozen-encoder state of the art while running up to ~3x faster. For video segmentation, it even performs on par with fully finetuned methods, while being up to 8x faster than state-of-the-art frozen-encoder models. Code: https://github.com/tue-mps/pmt.
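At a high level, a mask-transformer decoder of this kind turns a small set of learned queries into segmentation masks by attending to, and taking dot products with, the patch features of a frozen encoder. The sketch below illustrates that idea with NumPy; the single cross-attention step, the toy dimensions, and all function and weight names are illustrative assumptions, not the paper's actual PMD architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def plain_mask_decode(frozen_feats, queries, w_k, w_v):
    """One illustrative decoding step on frozen encoder features.

    frozen_feats: (H*W, d) patch features from a frozen VFM encoder.
    queries:      (n_q, d) learned object queries.
    Returns per-query mask logits of shape (n_q, H*W).
    """
    d = queries.shape[1]
    # Cross-attention: queries attend to the frozen patch features.
    attn = softmax(queries @ (frozen_feats @ w_k).T / np.sqrt(d))
    queries = queries + attn @ (frozen_feats @ w_v)   # residual update
    # Mask prediction: dot product between updated queries and features;
    # the encoder itself is never modified, only read.
    mask_logits = queries @ frozen_feats.T            # (n_q, H*W)
    return mask_logits

rng = np.random.default_rng(0)
d, hw, n_q = 16, 64, 4                     # toy sizes: 8x8 feature map, 4 queries
feats = rng.standard_normal((hw, d))       # stands in for frozen VFM output
q = rng.standard_normal((n_q, d))          # learned queries (trainable)
w_k = rng.standard_normal((d, d)) * 0.1    # decoder projections (trainable)
w_v = rng.standard_normal((d, d)) * 0.1
masks = sigmoid(plain_mask_decode(feats, q, w_k, w_v))
print(masks.shape)  # (4, 64): one soft mask per query
```

Because only the decoder-side parameters are trained, the encoder's representation stays byte-identical across tasks, which is what allows one frozen VFM to be shared by multiple such decoders.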
PDF · March 28, 2026