Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models
June 13, 2024
Authors: Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Ranjay Krishna
cs.AI
Abstract
Humans draw to facilitate reasoning: we draw auxiliary lines when solving
geometry problems; we mark and circle when reasoning on maps; we use sketches
to amplify our ideas and relieve our limited-capacity working memory. However,
such actions are missing in current multimodal language models (LMs). Current
chain-of-thought and tool-use paradigms only use text as intermediate reasoning
steps. In this work, we introduce Sketchpad, a framework that gives multimodal
LMs a visual sketchpad and tools to draw on the sketchpad. The LM conducts
planning and reasoning according to the visual artifacts it has drawn.
Different from prior work, which uses text-to-image models to enable LMs to
draw, Sketchpad enables LMs to draw with lines, boxes, marks, etc., which is
closer to human sketching and better facilitates reasoning. Sketchpad can also
use specialist vision models during the sketching process (e.g., draw bounding
boxes with object detection models, draw masks with segmentation models), to
further enhance visual perception and reasoning. We experiment with a wide
range of math tasks (including geometry, functions, graphs, and chess) and
complex visual reasoning tasks. Sketchpad substantially improves performance on
all tasks over strong base models with no sketching, yielding an average gain
of 12.7% on math tasks, and 8.6% on vision tasks. GPT-4o with Sketchpad sets a
new state of the art on all tasks, including V*Bench (80.3%), BLINK spatial
reasoning (83.9%), and visual correspondence (80.8%). All code and data are
available at https://visualsketchpad.github.io/.
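The interaction pattern the abstract describes — the LM alternating between textual thoughts and drawing actions on a shared canvas, then reasoning over the updated visual state — can be sketched as a simple agent loop. This is a minimal, hypothetical illustration, not the paper's actual API: names such as `Canvas`, `run_sketchpad`, and the toy model are assumptions made for exposition.

```python
# Hypothetical sketch of a Sketchpad-style loop: Thought -> Action (draw) ->
# Observation, repeated until the model commits to an answer. In the real
# system the "model" is a multimodal LM and the drawing actions may call
# specialist vision models (detection, segmentation); here both are stubbed.
from dataclasses import dataclass, field

@dataclass
class Canvas:
    """Accumulates drawing primitives (lines, boxes, marks) the LM adds."""
    primitives: list = field(default_factory=list)

    def draw_line(self, p0, p1):
        self.primitives.append(("line", p0, p1))

    def draw_box(self, xyxy, label=None):
        self.primitives.append(("box", xyxy, label))

def run_sketchpad(model_step, canvas, max_turns=5):
    """Alternate thought/action turns until the model returns an answer."""
    for _ in range(max_turns):
        thought, action, answer = model_step(canvas)
        if answer is not None:      # model has finished reasoning
            return answer
        action(canvas)              # execute the drawing action on the canvas
    return None

# Toy "model": first draws an auxiliary box around a region of interest,
# then answers based on the sketch it produced.
def toy_model(canvas):
    if not canvas.primitives:
        return ("locate the object",
                lambda c: c.draw_box((10, 10, 50, 50), "cat"),
                None)
    return ("the box contains the answer", None, "cat")

canvas = Canvas()
print(run_sketchpad(toy_model, canvas))  # -> cat
```

The key design point mirrored here is that the visual artifact itself is the intermediate reasoning state: each turn conditions on the canvas so far, rather than on text alone.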