Steerable Visual Representations

April 2, 2026
Authors: Jona Ruthardt, Manu Gaur, Deva Ramanan, Makarand Tapaswi, Yuki M. Asano
cs.AI

Abstract

Pretrained Vision Transformers (ViTs) such as DINOv2 and MAE provide generic image features that can be applied to a variety of downstream tasks such as retrieval, classification, and segmentation. However, such representations tend to focus on the most salient visual cues in the image, with no way to direct them toward less prominent concepts of interest. In contrast, Multimodal LLMs can be guided with textual prompts, but the resulting representations tend to be language-centric and lose their effectiveness for generic visual tasks. To address this, we introduce Steerable Visual Representations, a new class of visual representations whose global and local features can be steered with natural language. While most vision-language models (e.g., CLIP) fuse text with visual features after encoding (late fusion), we inject text directly into the layers of the visual encoder (early fusion) via lightweight cross-attention. We introduce benchmarks for measuring representational steerability and demonstrate that our steerable visual features can focus on any desired objects in an image while preserving the underlying representation quality. Our method also matches or outperforms dedicated approaches on anomaly detection and personalized object discrimination, exhibiting zero-shot generalization to out-of-distribution tasks.
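
The abstract does not spell out implementation details, so the following is only a minimal sketch of what "early fusion via lightweight cross-attention" could look like inside a ViT block. All names here (SteeredViTBlock, the zero-initialized gate, the token shapes) are illustrative assumptions, not the paper's actual architecture: visual tokens pass through standard self-attention, then additionally attend to text-prompt tokens before the MLP.

```python
import torch
import torch.nn as nn

class SteeredViTBlock(nn.Module):
    """Hypothetical ViT block with early text fusion.

    Standard self-attention + MLP, plus a lightweight cross-attention
    step that lets visual tokens attend to text tokens. This is a
    sketch of the general technique, not the paper's implementation.
    """

    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Cross-attention: visual tokens query the text tokens.
        self.norm_xattn = nn.LayerNorm(dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Assumed zero-initialized gate so the block starts as a plain
        # ViT block and fusion strength is learned during training.
        self.gate = nn.Parameter(torch.zeros(1))
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x: torch.Tensor, text: torch.Tensor) -> torch.Tensor:
        # x:    (B, N_img, dim) visual tokens
        # text: (B, N_txt, dim) steering-prompt tokens, e.g. projected
        #       outputs of a frozen text encoder (assumption)
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        # Early fusion: inject the text prompt into the visual stream.
        q = self.norm_xattn(x)
        x = x + torch.tanh(self.gate) * self.cross_attn(
            q, text, text, need_weights=False
        )[0]
        x = x + self.mlp(self.norm2(x))
        return x

# Usage: steer DINOv2-sized features toward a text prompt.
block = SteeredViTBlock()
img_tokens = torch.randn(2, 257, 768)  # e.g. CLS token + 16x16 patches
txt_tokens = torch.randn(2, 8, 768)    # projected prompt embedding
out = block(img_tokens, txt_tokens)    # steered tokens, same shape as input
print(out.shape)                       # torch.Size([2, 257, 768])
```

Because the cross-attention output is added residually and gated, the visual stream can fall back to its original, ungated features when no steering signal is useful, which is consistent with the abstract's claim that underlying representation quality is preserved.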