

SITTA: A Semantic Image-Text Alignment for Image Captioning

July 10, 2023
作者: Fabian Paischer, Thomas Adler, Markus Hofmarcher, Sepp Hochreiter
cs.AI

Abstract

Textual and semantic comprehension of images is essential for generating proper captions. This comprehension requires detecting objects, modeling the relations between them, assessing the semantics of the scene, and, finally, representing the extracted knowledge in a language space. To achieve rich language capabilities while ensuring good image-language mappings, pretrained language models (LMs) were conditioned on pretrained multi-modal (image-text) models that allow for image inputs. This requires aligning the image representation of the multi-modal model with the language representations of a generative LM. However, it is not clear how to best transfer semantics detected by the vision encoder of the multi-modal model to the LM. We introduce two novel ways of constructing a linear mapping that successfully transfers semantics between the embedding spaces of the two pretrained models. The first aligns the embedding space of the multi-modal language encoder with the embedding space of the pretrained LM via token correspondences. The second leverages additional data consisting of image-text pairs to construct the mapping directly from vision to language space. Using our semantic mappings, we unlock image captioning for LMs without access to gradient information. By using different sources of data we achieve strong captioning performance on the MS-COCO and Flickr30k datasets. Even in the face of limited data, our method partly exceeds the performance of other zero-shot and even finetuned competitors. Our ablation studies show that even LMs at a scale of merely 250M parameters can generate decent captions employing our semantic mappings. Our approach makes image captioning more accessible for institutions with restricted computational resources.
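The central object of the paper is a linear mapping between two pretrained embedding spaces, fitted either from token correspondences or from paired image-text embeddings. The sketch below is only an illustration of that idea using ordinary least squares; the dimensions, the random stand-in data, and the helper name are assumptions and do not reproduce the authors' actual training procedure.

```python
# Minimal sketch: fit a linear map W so that source embeddings projected with W
# land close to their paired target embeddings (e.g., CLIP space -> LM space).
# All data here is random; in practice the pairs would come from shared tokens
# (first method) or from image-caption pairs (second method).
import numpy as np

def fit_linear_map(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    """Solve min_W ||src @ W - tgt||_F^2 and return W of shape (d_src, d_tgt)."""
    W, *_ = np.linalg.lstsq(src, tgt, rcond=None)
    return W

rng = np.random.default_rng(0)
d_src, d_tgt, n_pairs = 512, 768, 1000   # illustrative embedding sizes

src_emb = rng.standard_normal((n_pairs, d_src))   # e.g., multi-modal encoder embeddings
tgt_emb = rng.standard_normal((n_pairs, d_tgt))   # e.g., LM token/caption embeddings

W = fit_linear_map(src_emb, tgt_emb)
projected = src_emb @ W            # source embeddings mapped into the LM space
print(W.shape, projected.shape)    # (512, 768) (1000, 768)
```

Once such a mapping is fitted, image embeddings can be projected into the LM's embedding space and used to condition caption generation without updating, or even accessing gradients of, the language model.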