

InstructDiffusion: A Generalist Modeling Interface for Vision Tasks

September 7, 2023
作者: Zigang Geng, Binxin Yang, Tiankai Hang, Chen Li, Shuyang Gu, Ting Zhang, Jianmin Bao, Zheng Zhang, Han Hu, Dong Chen, Baining Guo
cs.AI

Abstract

We present InstructDiffusion, a unifying and generic framework for aligning computer vision tasks with human instructions. Unlike existing approaches that integrate prior knowledge and pre-define the output space (e.g., categories and coordinates) for each vision task, we cast diverse vision tasks into a human-intuitive image-manipulating process whose output space is a flexible and interactive pixel space. Concretely, the model is built upon the diffusion process and is trained to predict pixels according to user instructions, such as encircling the man's left shoulder in red or applying a blue mask to the left car. InstructDiffusion could handle a variety of vision tasks, including understanding tasks (such as segmentation and keypoint detection) and generative tasks (such as editing and enhancement). It even exhibits the ability to handle unseen tasks and outperforms prior methods on novel datasets. This represents a significant step towards a generalist modeling interface for vision tasks, advancing artificial general intelligence in the field of computer vision.
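The abstract describes training a diffusion model to predict pixels conditioned on a free-form instruction. A minimal sketch of that idea is the standard noise-prediction (ε-prediction) diffusion objective with an instruction embedding as extra conditioning. Everything below is illustrative: the function names, the toy one-step noise schedule, and the zero-predicting stand-in model are assumptions for demonstration, not the paper's actual architecture or training code.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_loss(x0, instr_emb, predict_noise, t_frac=0.5):
    """Toy ε-prediction objective: corrupt the clean pixels x0 with
    Gaussian noise, then ask the model to recover that noise while
    conditioned on an instruction embedding (e.g. an encoding of
    "apply a blue mask to the left car")."""
    eps = rng.standard_normal(x0.shape)            # Gaussian noise ε
    alpha = 1.0 - t_frac                           # simplistic one-step schedule
    x_t = np.sqrt(alpha) * x0 + np.sqrt(1.0 - alpha) * eps  # noised image
    eps_hat = predict_noise(x_t, instr_emb)        # model's noise estimate
    return float(np.mean((eps_hat - eps) ** 2))    # MSE between ε and ε̂

# Illustrative stand-in "model": ignores its inputs, predicts zero noise.
zero_model = lambda x_t, cond: np.zeros_like(x_t)

x0 = rng.standard_normal((8, 8, 3))     # a tiny stand-in "image"
instr = rng.standard_normal(16)         # stand-in instruction embedding
loss = diffusion_loss(x0, instr, zero_model)
```

Because the stand-in model predicts zero noise, the loss reduces to the mean squared magnitude of ε, which is close to 1 for standard Gaussian noise; a trained conditional denoiser would drive this toward 0. The key point the abstract makes is that the conditioning signal is a natural-language instruction and the output space is pixels, so understanding tasks (segmentation, keypoints) and generative tasks (editing, enhancement) share one interface.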