Complexity of Symbolic Representation in Working Memory of Transformer Correlates with the Complexity of a Task

June 20, 2024
作者: Alsu Sagirova, Mikhail Burtsev
cs.AI

Abstract

Even though Transformers are extensively used for Natural Language Processing tasks, especially for machine translation, they lack an explicit memory to store key concepts of the processed texts. This paper explores the properties of the content of a symbolic working memory added to the Transformer model decoder. Such working memory enhances the quality of model predictions in the machine translation task and works as a neural-symbolic representation of information that is important for the model to make correct translations. The study of memory content revealed that keywords of the translated text are stored in the working memory, pointing to the relevance of memory content to the processed text. Also, the diversity of tokens and parts of speech stored in memory correlates with the complexity of the corpora for the machine translation task.
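
The abstract describes attaching a symbolic working memory to the Transformer decoder so that the model can store and later reuse key concepts of the text it is translating. As a rough illustration only, and not the paper's actual implementation, the PyTorch sketch below shows one simple way extra memory slots could be exposed to a decoder's self-attention: the class name, the number of slots, the learnable slot initialization, and the omission of attention masks are all assumptions made for brevity.

```python
import torch
import torch.nn as nn


class DecoderWithWorkingMemory(nn.Module):
    """Hypothetical sketch: prepend learnable memory slots to the decoder
    input so self-attention can read from and write to them at every layer.
    This is an illustration of the general idea, not the paper's method."""

    def __init__(self, vocab_size=32000, d_model=512, nhead=8,
                 num_layers=6, num_mem_slots=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Learnable initial content for the memory slots (an assumption;
        # the paper may initialize or update memory differently).
        self.memory_slots = nn.Parameter(torch.randn(num_mem_slots, d_model))
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)
        self.num_mem_slots = num_mem_slots

    def forward(self, tgt_tokens, encoder_states):
        # tgt_tokens: (batch, tgt_len); encoder_states: (batch, src_len, d_model)
        batch = tgt_tokens.size(0)
        tgt = self.embed(tgt_tokens)
        mem = self.memory_slots.unsqueeze(0).expand(batch, -1, -1)
        # Concatenate memory slots in front of the target sequence so the
        # decoder's self-attention treats them as extra positions.
        # (Causal masking is omitted here for brevity.)
        x = torch.cat([mem, tgt], dim=1)
        h = self.decoder(x, encoder_states)
        # Drop the memory positions before predicting output tokens.
        return self.out(h[:, self.num_mem_slots:, :])
```

An analysis like the one described in the abstract would then inspect what these memory positions contain after training, for example by mapping them back to nearby token embeddings and tallying the words and parts of speech they represent.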
