Coarse Correspondence Elicit 3D Spacetime Understanding in Multimodal Language Model
August 1, 2024
作者: Benlin Liu, Yuhao Dong, Yiqin Wang, Yongming Rao, Yansong Tang, Wei-Chiu Ma, Ranjay Krishna
cs.AI
Abstract
Multimodal language models (MLLMs) are increasingly being implemented in
real-world environments, necessitating their ability to interpret 3D spaces and
comprehend temporal dynamics. Despite their potential, current top models
within our community still fall short in adequately understanding spatial and
temporal dimensions. We introduce Coarse Correspondence, a simple,
training-free, effective, and general-purpose visual prompting method to elicit
3D and temporal understanding in multimodal LLMs. Our method uses a lightweight
tracking model to find object correspondences between frames in a video or
between sets of image viewpoints. It selects the most frequent object instances
and visualizes them with markers with unique IDs in the image. With this simple
approach, we achieve state-of-the-art results on 3D understanding benchmarks
including ScanQA (+20.5%) and a subset of OpenEQA (+9.7%), and on long-form
video benchmarks such as EgoSchema (+6.0%). We also curate a small diagnostic
dataset to evaluate whether MLLMs can reason about space from a described
viewpoint other than the camera viewpoint. Again, Coarse Correspondence
improves spatial perspective-taking abilities but we highlight that MLLMs
struggle with this task. Together, we demonstrate that our simple prompting
method can significantly aid downstream tasks that require 3D or temporal
reasoning.
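To make the prompting idea concrete, below is a minimal Python sketch of how such a pipeline could be wired up, assuming a lightweight tracker has already produced per-frame instance tracks. The `Track` structure, the `top_k` selection, and the circle-plus-ID marker style are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
# Sketch: select the most frequently tracked object instances across frames
# and overlay unique-ID markers, producing frames to pass to an MLLM as
# visual prompts. Track format and marker styling are assumptions.
from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

import cv2
import numpy as np


@dataclass
class Track:
    track_id: int             # persistent instance ID provided by the tracker
    center: Tuple[int, int]   # (x, y) pixel location of the instance in this frame


def select_frequent_tracks(frame_tracks: List[List[Track]], top_k: int = 5) -> List[int]:
    """Pick the instance IDs that appear in the largest number of frames."""
    counts = Counter(t.track_id for tracks in frame_tracks for t in tracks)
    return [tid for tid, _ in counts.most_common(top_k)]


def draw_markers(frame: np.ndarray, tracks: List[Track], keep_ids: List[int]) -> np.ndarray:
    """Overlay a filled circle and the track ID for each selected instance."""
    out = frame.copy()
    for t in tracks:
        if t.track_id not in keep_ids:
            continue
        cv2.circle(out, t.center, radius=18, color=(0, 0, 255), thickness=-1)
        cv2.putText(out, str(t.track_id), (t.center[0] - 8, t.center[1] + 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (255, 255, 255), 2)
    return out


def prompt_frames(frames: List[np.ndarray], frame_tracks: List[List[Track]],
                  top_k: int = 5) -> List[np.ndarray]:
    """Return frames annotated with IDs for the most frequent instances."""
    keep_ids = select_frequent_tracks(frame_tracks, top_k)
    return [draw_markers(f, ts, keep_ids) for f, ts in zip(frames, frame_tracks)]
```

Because the same ID appears on the same object in every annotated frame or viewpoint, the model can use the markers as coarse correspondences when answering 3D or temporal questions; the annotated frames are simply supplied in place of the raw frames in the MLLM prompt.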