Alpha-CLIP: A CLIP Model Focusing on Wherever You Want
December 6, 2023
Authors: Zeyi Sun, Ye Fang, Tong Wu, Pan Zhang, Yuhang Zang, Shu Kong, Yuanjun Xiong, Dahua Lin, Jiaqi Wang
cs.AI
Abstract
Contrastive Language-Image Pre-training (CLIP) plays an essential role in
extracting valuable content information from images across diverse tasks. It
aligns textual and visual modalities to comprehend the entire image, including
all the details, even those irrelevant to specific tasks. However, for a finer
understanding and controlled editing of images, it becomes crucial to focus on
specific regions of interest, which can be indicated as points, masks, or boxes
by humans or perception models. To fulfill these requirements, we introduce
Alpha-CLIP, an enhanced version of CLIP with an auxiliary alpha channel that
suggests attentive regions, fine-tuned on millions of constructed RGBA
region-text pairs. Alpha-CLIP not only preserves the visual recognition ability
of CLIP but also enables precise control over the emphasis of image contents.
It demonstrates effectiveness in various tasks, including but not limited to
open-world recognition, multimodal large language models, and conditional
2D/3D generation. It has strong potential to serve as a versatile tool for
image-related tasks.
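
The abstract describes feeding CLIP an auxiliary alpha channel alongside the RGB input to mark regions of interest. One plausible way to realize this, sketched below with toy sizes in pure Python, is to add a parallel projection for the alpha channel in the patch embedding; the zero-initialization of the new weights is an assumption chosen so the modified model starts out behaving exactly like the original CLIP before fine-tuning. All names and dimensions here are illustrative, not the paper's actual implementation.

```python
# Hedged sketch: a ViT-style patch embedding extended with an alpha channel.
# Toy sizes; real CLIP uses e.g. 14x14 patches and a 768-d embedding.

PATCH = 2                  # toy patch side length
DIM = 3                    # toy embedding dimension
N_RGB = 3 * PATCH * PATCH  # flattened RGB patch length
N_A = 1 * PATCH * PATCH    # flattened alpha patch length

# "Pretrained" RGB projection weights (toy values) and a parallel
# alpha projection initialized to zero (assumption: zero-init keeps the
# initial output identical to the RGB-only model).
w_rgb = [[0.1 * (i + j) for j in range(N_RGB)] for i in range(DIM)]
w_alpha = [[0.0] * N_A for _ in range(DIM)]

def dot(w, x):
    """Matrix-vector product: project a flattened patch to the embed dim."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def embed_patch(rgb_flat, alpha_flat):
    """Embedding = RGB projection + alpha projection, summed element-wise."""
    return [a + b for a, b in zip(dot(w_rgb, rgb_flat),
                                  dot(w_alpha, alpha_flat))]

rgb = [0.5] * N_RGB
alpha = [1.0] * N_A  # all-ones alpha = attend to the whole patch

# With zero-initialized alpha weights, the embedding equals the plain RGB
# one, so fine-tuning on RGBA region-text pairs can start from CLIP's
# original behavior and gradually learn to use the alpha hint.
assert embed_patch(rgb, alpha) == dot(w_rgb, rgb)
```

During fine-tuning, the alpha branch would learn nonzero weights, letting a mask of ones over a region of interest (and zeros elsewhere) shift the embedding toward that region's content.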