
Multimodal Neurons in Pretrained Text-Only Transformers

August 3, 2023
Authors: Sarah Schwettmann, Neil Chowdhury, Antonio Torralba
cs.AI

Abstract

Language models demonstrate remarkable capacity to generalize representations learned in one modality to downstream tasks in other modalities. Can we trace this ability to individual neurons? We study the case where a frozen text transformer is augmented with vision using a self-supervised visual encoder and a single linear projection learned on an image-to-text task. Outputs of the projection layer are not immediately decodable into language describing image content; instead, we find that translation between modalities occurs deeper within the transformer. We introduce a procedure for identifying "multimodal neurons" that convert visual representations into corresponding text, and decoding the concepts they inject into the model's residual stream. In a series of experiments, we show that multimodal neurons operate on specific visual concepts across inputs, and have a systematic causal effect on image captioning.
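
To make the described setup concrete, here is a minimal PyTorch sketch of the two ideas in the abstract: (1) a single linear projection that maps frozen visual-encoder features into the frozen language model's input space, and (2) logit-lens-style decoding of what one MLP neuron writes into the residual stream. All names (VisionToTextProjection, decode_neuron_tokens, the soft-prompt token count) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class VisionToTextProjection(nn.Module):
    """Hypothetical sketch: the single trained component that maps pooled
    features from a frozen self-supervised visual encoder into the frozen
    text transformer's embedding space as a sequence of soft-prompt tokens."""

    def __init__(self, vision_dim: int, lm_dim: int, num_prompt_tokens: int = 4):
        super().__init__()
        self.num_prompt_tokens = num_prompt_tokens
        self.lm_dim = lm_dim
        # One linear layer; both the visual encoder and the LM stay frozen.
        self.proj = nn.Linear(vision_dim, lm_dim * num_prompt_tokens)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, vision_dim) pooled visual features
        out = self.proj(image_features)
        # (batch, num_prompt_tokens, lm_dim), prepended to the text embeddings
        return out.view(-1, self.num_prompt_tokens, self.lm_dim)


def decode_neuron_tokens(w_out: torch.Tensor, unembed: torch.Tensor,
                         k: int = 5) -> torch.Tensor:
    """Logit-lens-style decoding of a single MLP neuron: project the vector
    it writes into the residual stream through the unembedding matrix and
    return the ids of the k tokens it most strongly promotes.

    w_out:   (lm_dim,) the neuron's output-weight column
    unembed: (vocab_size, lm_dim) the LM's output embedding matrix
    """
    token_scores = unembed @ w_out  # (vocab_size,) score per vocabulary token
    return torch.topk(token_scores, k).indices
```

Under this reading, a neuron whose top-k decoded tokens name a visual concept, and whose activation is driven by the projected image prompt, would be a candidate "multimodal neuron" in the paper's sense; ablating it should then systematically change the generated caption.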