

SITTA: A Semantic Image-Text Alignment for Image Captioning

July 10, 2023
作者: Fabian Paischer, Thomas Adler, Markus Hofmarcher, Sepp Hochreiter
cs.AI

Abstract

Textual and semantic comprehension of images is essential for generating proper captions. The comprehension requires detection of objects, modeling of relations between them, an assessment of the semantics of the scene and, finally, representing the extracted knowledge in a language space. To achieve rich language capabilities while ensuring good image-language mappings, pretrained language models (LMs) were conditioned on pretrained multi-modal (image-text) models that allow for image inputs. This requires an alignment of the image representation of the multi-modal model with the language representations of a generative LM. However, it is not clear how to best transfer semantics detected by the vision encoder of the multi-modal model to the LM. We introduce two novel ways of constructing a linear mapping that successfully transfers semantics between the embedding spaces of the two pretrained models. The first aligns the embedding space of the multi-modal language encoder with the embedding space of the pretrained LM via token correspondences. The second leverages additional data consisting of image-text pairs to construct the mapping directly from vision to language space. Using our semantic mappings, we unlock image captioning for LMs without access to gradient information. By using different sources of data, we achieve strong captioning performance on the MS-COCO and Flickr30k datasets. Even in the face of limited data, our method partly exceeds the performance of other zero-shot and even finetuned competitors. Our ablation studies show that even LMs at a scale of merely 250M parameters can generate decent captions employing our semantic mappings. Our approach makes image captioning more accessible for institutions with restricted computational resources.
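
The core idea described in the abstract is a linear map fitted between two frozen embedding spaces. The sketch below is a minimal illustration of how such a map could be estimated in closed form with ridge regression; it is not the authors' released code, and the function name, variable names, and the regularization term are assumptions made for illustration. The paired embeddings could come either from tokens shared by both vocabularies (the token-correspondence variant) or from external image-caption pairs (the data-driven variant).

```python
# Minimal sketch (assumed implementation, not the authors' code): fit a linear
# mapping W between two pretrained embedding spaces via closed-form ridge regression.
import numpy as np

def fit_linear_mapping(source_embs: np.ndarray,
                       target_embs: np.ndarray,
                       ridge: float = 1e-3) -> np.ndarray:
    """Solve W = argmin_W ||source_embs @ W - target_embs||^2 + ridge * ||W||^2.

    source_embs: (n, d_src) paired embeddings from the multi-modal model,
                 e.g. CLIP token embeddings or CLIP image embeddings.
    target_embs: (n, d_tgt) embeddings of the same items in the LM's input space,
                 e.g. the LM's token embeddings or embedded caption tokens.
    Returns W of shape (d_src, d_tgt).
    """
    d_src = source_embs.shape[1]
    # Closed-form ridge solution: W = (X^T X + ridge * I)^{-1} X^T Y
    gram = source_embs.T @ source_embs + ridge * np.eye(d_src)
    return np.linalg.solve(gram, source_embs.T @ target_embs)

# Token-correspondence variant: pair embeddings of tokens shared by both vocabularies.
# External-data variant: pair image embeddings with embeddings of their captions.
# At inference, an image embedding mapped through W can condition the frozen LM
# (e.g. as a soft prompt) to generate a caption -- no LM gradients are required.
```

The closed-form fit keeps both pretrained models frozen, which is consistent with the abstract's claim that captioning is unlocked without access to gradient information.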