Probing the 3D Awareness of Visual Foundation Models
April 12, 2024
Authors: Mohamed El Banani, Amit Raj, Kevis-Kokitsi Maninis, Abhishek Kar, Yuanzhen Li, Michael Rubinstein, Deqing Sun, Leonidas Guibas, Justin Johnson, Varun Jampani
cs.AI
Abstract
Recent advances in large-scale pretraining have yielded visual foundation
models with strong capabilities. Not only can recent models generalize to
arbitrary images for their training task, but their intermediate representations
are also useful for other visual tasks such as detection and segmentation. Given
that such models can classify, delineate, and localize objects in 2D, we ask
whether they also represent their 3D structure. In this work, we analyze the 3D
awareness of visual foundation models. We posit that 3D awareness implies that
representations (1) encode the 3D structure of the scene and (2) consistently
represent the surface across views. We conduct a series of experiments using
task-specific probes and zero-shot inference procedures on frozen features. Our
experiments reveal several limitations of the current models. Our code and
analysis can be found at https://github.com/mbanani/probe3d.
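The probing protocol the abstract describes (train a lightweight, task-specific read-out on frozen intermediate features, leaving the pretrained model untouched) can be sketched with a toy example. The random feature map and regression target below are placeholders, not the paper's actual models or benchmarks; a minimal ridge-regression probe stands in for the task-specific probes used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen foundation-model feature extractor (hypothetical;
# the paper probes real pretrained models). A fixed random projection
# followed by a ReLU plays the role of frozen intermediate features.
W_frozen = rng.normal(size=(64, 16))  # never updated during probing

def frozen_features(x):
    return np.maximum(x @ W_frozen, 0.0)

# Toy per-example regression target (e.g., a depth-like quantity).
X = rng.normal(size=(200, 64))
y = rng.normal(size=(200,))

# Task-specific probe: a closed-form ridge-regression read-out trained
# on the frozen features only; the backbone weights are never touched.
F = frozen_features(X)
lam = 1e-2
w_probe = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

pred = F @ w_probe
mse = float(np.mean((pred - y) ** 2))
```

Only `w_probe` is fit; `W_frozen` stays fixed, so the probe's error measures what the frozen representation already encodes about the target task.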